Hello sir
If you train for 10 epochs instead of 100, will the LSTM model still work when you proceed to encode and decode?
Hi, can you please explain how the embedding takes place in this algorithm, and which word embedding you used?
He used a one-hot vector to encode each of the words; each vector would be the size of the number of unique words in the vocabulary.
You didn't do any new video? Bro, I just found your channel, and I swear this is the easiest to understand compared to any machine learning channel. It's very straightforward and educational. Please know that by doing this you help many people and future generations to study. Please keep it up! I'm rooting for you!
Nice tutorial. Sir, please also upload a character-based LSTM text generation tutorial, which handles out-of-vocabulary words.
Thanks for watching and yeah sure.
My favourite channel and guiding teacher. A very good and awesome lecture; I wait for your lectures eagerly.
Thanks for watching ❤️ 😍
Just Great!!
Your Blogs are very good....
Thanks 😍
Good one. Thank you so much. I appreciate your easy-to-understand explanation.
Thank you 😊
Good tutorials. I wanted to just have a high-level understanding, but they are so interesting that I am going into the details.
props man, this is great content
Thanks for all the wonderful videos you have been posting. I really enjoy them, and being a newbie in machine learning, they have increased my knowledge and understanding. Please can you make a video on NLP using an RNN (GRU model) to predict sentiment? Thanks from the UK.
Your explanation and work are awesome, Laxmikant
Please make a video on skip-thoughts models.
PLEASE REPLY FAST
Hello, the video and explanation are great, but I'm getting an error:
AttributeError: 'Sequential' object has no attribute 'predict_classes'
Can you make tutorial on BERT?
Hello Sir, Can you please provide the source code?
This video is really fantastic, thank you very much sir. Is it possible for you to post the code here so that we can use it to practice on our own.
Hello sir, your tutorial is very good, but I have a doubt about the generate_text function: what is the value of n_words which we use for iterating?
Bro your voice is so relaxing, you put me to sleep.
😂
Very good explanation. Start making more videos on cool projects and videos on blogs.
Thanks for watching ❤️ 😍.
can you pls share the code?
Hi sir, this video was very helpful.
I am not understanding the last part, the def function part.
I am getting an error on the line "y_predict = model.predict_classes(encoded)": AttributeError: 'Sequential' object has no attribute 'predict_classes'. Could you please help me with this?
The sequences array that I am getting is 1-D instead of 2-D. How do I correct it? I've put length = 50 + 1 and followed the exact same steps. Please help. Code: x, y = sequences[:, :-1], sequences[:, -1]. Error: too many indices for array: array is 1-dimensional, but 2 were indexed.
ValueError: Shapes (None, 13009) and (None, 100) are incompatible. Sir, I am getting this error, and I think it should be model.add(Dense(y.shape[1], activation='softmax')) instead of model.add(Dense(100, activation='softmax')). Am I right or not, sir?
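For anyone hitting this: the error means the one-hot targets have 13009 columns (the vocabulary size) while the output layer has only 100 units, and the fix suggested in the comment is correct. A minimal sketch, assuming `y` is one-hot encoded (the sizes below come from the error message; the hidden layer is illustrative):

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

vocab_size = 13009                 # from the error message
y = np.zeros((4, vocab_size))      # stand-in one-hot targets
y[np.arange(4), [0, 1, 2, 3]] = 1.0

model = Sequential()
model.add(Dense(100, input_shape=(50,), activation='relu'))
# The output layer needs one unit per class, i.e. y.shape[1],
# not a hard-coded 100 -- that mismatch is what raised the ValueError.
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
print(model.output_shape)          # (None, 13009)
```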
Hi Sir,
I am not able to access your blog and code. The link is failing. Could you please correct it?
Hello, the link to the blog with the explanation of LSTM for text prediction doesn't let me view the page. Do you know what the issue is? Thanks!
Sir, in the second-to-last line I am getting "n_words not defined".
Please help me out.
start a course on OpenCV please!
Thanks for watching. I'm preparing drafts for the same.
Earlier I was getting the seed_text error, but then I saw your text in the comments and corrected it. But what do we do for n_words?
Which video should I watch to learn automatic question generation from a paragraph in a text file?
Hey man !
First of all, thank you for this awesome video and model.
But I got a problem here with the 2-D ndarray transformation, at 36:07.
My X and y look like this:
ndarray with shape (71787,)
And of course I get this error when I execute
X, y = sequences[:, :-1], sequences[:, -1]
IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed
Did you have this error before? My lines are identical to yours.
Thank you
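This IndexError (also reported in an earlier comment) usually means the list handed to np.array was ragged or still contained strings, so NumPy could not build a 2-D matrix and fell back to a 1-D array. Every window must contain exactly seq_length + 1 token ids before the slicing works. A minimal sketch with tiny illustrative windows:

```python
import numpy as np

# Every training window must have the same length (seq_length + 1 tokens).
# A ragged list like [[1, 2, 3], [4, 5]] cannot become a 2-D array, and
# slicing it with two indices raises exactly this IndexError.
sequences = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
assert all(len(s) == len(sequences[0]) for s in sequences)

arr = np.array(sequences)
print(arr.shape)                 # (3, 3) -- 2-D, as expected
X, y = arr[:, :-1], arr[:, -1]   # last token of each window is the label
print(X.shape, y.shape)          # (3, 2) (3,)
```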
When will you upload the deep learning tutorial? Your voice is good.
Hi, this video also falls under deep learning. I think you are referring to the CNN videos. Please watch them here: ruclips.net/p/PLc2rvfiptPSR3iwFp1VHVJFK4yAMo0wuF
@@KGPTalkie you are very humble. Good to have you as a guide.
Thank you, sir, for this tutorial. Your teaching is very simple to understand. I want the article link; can you send the article?
At 26:00 I am unable to get the "get more RAM" option and the code is showing an error. Please help me sort this out.
After running model.fit(x, y, batch_size=256, epochs=100) it shows an error and I am unable to get the RAM option.
This option is now removed from Google colab. You need to buy it from Google if you want.
Then what is the solution
@@KGPTalkie don't we have other options
@@KGPTalkie don't we have an alternate solution?
please make more video on NLP
Thanks for this amazing tutorial. I followed the tutorial on my local machine and I get the following generated text:
"sir and and and and and and and and and and and and and and and and and and and and and and and and and ...."
Not sure what I'm doing wrong. I copy-pasted your code and the same happens.
Thanks!
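A common cause of this "and and and" loop is an undertrained model (the most frequent word wins every argmax) or greedy decoding locking onto one token. Training longer helps; so does sampling from the softmax with a temperature instead of always taking the argmax. A hedged sketch of temperature sampling — the helper name and the probability row are illustrative, not from the video:

```python
import numpy as np

def sample_with_temperature(probs, temperature=0.8):
    """Draw a token id from a softmax row instead of taking argmax.

    Lower temperature sharpens the distribution (argmax is the limit as
    temperature -> 0, which is what loops on 'and and and' once one word
    dominates); higher temperature flattens it and adds variety.
    """
    probs = np.asarray(probs, dtype=np.float64)
    logits = np.log(probs + 1e-9) / temperature
    exp = np.exp(logits - logits.max())   # subtract max for numerical stability
    p = exp / exp.sum()
    return int(np.random.choice(len(p), p=p))

np.random.seed(0)
probs = [0.05, 0.70, 0.20, 0.05]          # stand-in for model.predict(...)[0]
print(sample_with_temperature(probs, temperature=0.8))
```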
Hello sir, at the end I am getting "n_words not found". What do I do now?
Hello, is there any tutorial for doing this with a transformer model?
Hello Sir, thank you for this tutorial. The explanation is very informative. I have a doubt: while writing the function generate_text_sequence, why did you use [0] in tokenizer.texts_to_sequences([seed_text])[0]? Also, why did you add 1 to the vocabulary size? Please help me understand this. Thank you!
It generates a 2-D array, so to make it 1-D I used [0]. Like this: [[value]][0] -> [value].
The 1 is added because at run time the algorithm needs one extra slot beyond the known words; that is, the size of the known words plus 1.
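To make the [0] part concrete — texts_to_sequences always returns one inner list per input text, even when a single string is passed, so [0] unwraps the outer list (the toy sentence below is illustrative):

```python
from tensorflow.keras.preprocessing.text import Tokenizer

tokenizer = Tokenizer()
tokenizer.fit_on_texts(["the cat sat on the mat"])

# texts_to_sequences returns a list of lists: one inner list of token
# ids per input text, even when only one string is passed in.
encoded = tokenizer.texts_to_sequences(["the cat sat"])
print(encoded)       # a nested structure like [[1, 2, 3]]
print(encoded[0])    # [0] unwraps it to the single 1-D sequence
```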
@@KGPTalkie hi sir, when I was trying this, my RAM is not getting updated. What can I do, sir?
I am new to ML, and this video is a great help in my learning. One thing: can you please tell me how to make this work offline? I can copy and run the code on my PC, but the issue is that the model trains every time. How do I train once and reuse the model for output generation?
After training, use model.save('model.h5'). It will save the model for later use. To use it later, use model = load_model('model.h5').
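A minimal sketch of the save/reload round trip. Note that the tokenizer also needs persisting (e.g. with pickle), since generation requires the same word-to-id mapping; the tiny model below is a stand-in, not the video's LSTM:

```python
import numpy as np
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Dense

# Tiny stand-in model; the real LSTM model saves the same way.
model = Sequential([Dense(4, input_shape=(3,), activation='relu'),
                    Dense(1)])
model.compile(optimizer='adam', loss='mse')
model.save('model.h5')             # architecture + weights + optimizer state

restored = load_model('model.h5')  # in a later session: no retraining needed
x = np.ones((1, 3))
print(np.allclose(model.predict(x), restored.predict(x)))   # True
```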
@@KGPTalkie Thank you :)
would you pls share the code
How can I predict the punctuation in a text using the same approach? Please help.
Thanks for the video. Why don't you use spaCy or NLTK for text preprocessing, and why not word2vec for the embeddings?
Thanks for watching. This is just basic way to do it. You can watch previous videos to know more about Spacy and word2vec.
@@KGPTalkie thanks for the reply..please make video on using spacy and word2vec with RNN if possible...
sudheer rao Yeah sure. Probably Next week.
Bro, are you using RAM or virtual RAM? In Jupyter a single epoch is taking about 700 s to train. Can you help me out, brother?
Please use Google Colab and set the runtime to GPU or TPU. You might be running only on the CPU.
Where did we use NLP here?
Very interesting, thank you very much! Can I adapt this code for spell-checker work?
Yes you can!
@@KGPTalkie thank you very much. I am going to do "Real word error detection and correction for Afaan Oromo using a deep learning approach" for my final MSc thesis at Bule Hora University in Ethiopia. If you have time, I will ask some things later.
Yeah sure.
@@KGPTalkie But the GET RAM UPGRADE option doesn't appear for me. How do I fix this?
Hi, those options are not available now in CoLab. You need to get Colab Pro.
Hi Dude, thanks for the video.
Can you please help me with the doubts below:
1. vocab_size = len(tokenizer.word_index) + 1 -> Why we are taking plus one here?
2. model.add(Embedding(vocab_size, 50, input_length=seq_length)) - Why input dimension is "vocab_size" here?
3. encoded = tokenizer.texts_to_sequences([seed_text])[0] -> What does '[0]' denotes here?
1. Because indexing starts from 0, the total length is the last index plus 1.
2. That is the total number of words for which vectors will be generated internally.
3. It creates a list of lists, so [0] takes the only inner list. Like this: [[some data]][0] equals [some data].
Thanks for watching
@@KGPTalkie Thanks for the reply, but when I print "tokenizer.word_index", it shows indices starting from 1 only, so I didn't get the meaning of adding +1.
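On that follow-up: word_index does start at 1 — index 0 is never assigned to a word because Keras reserves it for padding (pad_sequences fills with 0). The embedding table and the one-hot output both index from 0, so they need len(word_index) + 1 entries to reach the largest word index. A quick check with a toy sentence:

```python
from tensorflow.keras.preprocessing.text import Tokenizer

tokenizer = Tokenizer()
tokenizer.fit_on_texts(["to be or not to be"])
print(tokenizer.word_index)   # indices start at 1, not 0

# Word indices run 1..N and index 0 is reserved for padding, so an
# embedding table (or one-hot vector) needs N + 1 rows to cover index N.
vocab_size = len(tokenizer.word_index) + 1
print(vocab_size)             # 5 here: words at indices 1..4, plus slot 0
```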
Hi,
This is in regard to the code files which you requested for different topics. I request you to please get enrolled and show your support and love to KGP Talkie. All the code files and video lectures have lifetime access with a 30-day money-back guarantee. Code and question-answer support are also available at Udemy.
Code files of RUclips lectures will be also available once you register in this course. Please send an email to udemy@kgptalkie.com with your registration details of this course and a list of other code files that you want.
I promise you to give FREE COUPONS for the next course on Deep Learning and ML. You can click on the link mentioned below and can get yourself enrolled!! bit.ly/udemy95off_kgptalkie
New content is added at Udemy:
1. Animation Plot [2 lectures]
2. Python Coding in Mobile [5 lectures]
3. Complete EDA of Boston Dataset [20 lectures]
What else we promise in this course
1. Kaggle data EDA
2. Text data EDA
3. More Animation Plot
4. More 3D plots
5. Figure Aesthetics and Decoration
6. Free coupons for next course
7. And so much more.
Hurry up!!! Only for a limited time.
Please email your details at udemy@kgptalkie.com for the FREE COUPONS of the next course.
Sir, could you provide link to data set?
Where is the dataset
Excellent video on NLP. But I am getting an error at generate_text_seq, it says the seed_text is not defined. Kindly guide me
Please define some text for this variable
@@KGPTalkie I did that too, but now it returns only one word instead of 10/100 words. The last line of code is "generate_text_seq(model, tokenizer, seq_length, seed_text, 100)".
Hey, I have the same error. Have you resolved it?
Hi, just pass some seed text to seed_text = "seed text Day"
Sir, how is the grammar taken care of?
Grammar is the most difficult part. It is not handled in this course.
@@KGPTalkie Thanks for the prompt reply, sir. Please give a hint regarding the incorporation of grammar, please, sir.
Hi Lakshmi, thanks for the video. I was just following along step by step in Google Colab, but I am not getting an option to get additional RAM. Could you help me here, please?
Hi
Google has removed this feature. You can't get additional RAM.
@@KGPTalkie Then what's the solution?
Reduce the batch size
@@KGPTalkie I have tried but again the program crashed....what should I do, please help.
@@KGPTalkie Isn't there any solution? Do I need to skip the project?
Why are we taking [0] after encoded in the generate function we made?
Hi, please let me know the time in the video or the notebook code line.
@@KGPTalkie Hello code line 79 @ 55:24 why we used [0] in the end?
Do we want to grab only the first encoding of the word?
And in pad sequences why we are using truncating as 'pre'?
That is the seed word, i.e., you have to give the first words to get started. Thereafter it will automatically predict the next one, and so on.
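On the truncating='pre' part of the question: during generation the seed keeps growing as predicted words are appended, but the model only looks at the last seq_length tokens. truncating='pre' drops tokens from the front and keeps the most recent context. A small demonstration (the token ids are illustrative):

```python
from tensorflow.keras.preprocessing.sequence import pad_sequences

seq = [[1, 2, 3, 4, 5, 6]]   # a growing seed, longer than maxlen

# 'pre' truncation keeps the most recent tokens -- the context that
# actually matters for predicting the next word.
print(pad_sequences(seq, maxlen=4, truncating='pre'))    # [[3 4 5 6]]
print(pad_sequences(seq, maxlen=4, truncating='post'))   # [[1 2 3 4]]
```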
Hi, Where is the notebook for this tutorial?
Please follow along the video. Do let me know if you get any error.
Hey, can this model be used to generate Hindi lyrics (transliterated into English)? I am building a project and wanted to know. If anybody knows about this, please reply.
I have not tested it yet
@@KGPTalkie So it works on meaningful English words, not on words like, say, "tum hi ho"? If you test it, just let me know. Thanks in advance.
Sure.
Google is not allowing more RAM.
Hi that option is not available now. You need to use Colab Pro version
Can I get the SOURCE CODE
Hi,
Source codes are being made available at kgptalkie.com
Please keep an eye on it. NLP lessons will be uploaded in a week. You can find rest of codes there.
Coming back again: for the cleaning you may easily use re.
Yes, that is also an option. It is the beauty of Python: you have many ways to do the same task.
Sir notebook for this video
Why do you always use a random state of 42 only, and not other numbers? How can we decide which number to choose as the random state? What does it actually show — I mean, its role? I have a lot of doubts regarding this random state number. Thank you.
The random state could be anything. Fixing it to a number makes sure that whatever random numbers are generated will always be the same. ML starts with some random weights and settings inside, and this number ensures those settings are the same whenever you rerun or reproduce the result. You can also see it as a seed number for the random generator; I would suggest you read about random generators and seed numbers in computer programs. I chose 42 as it is a common default, but you can select anything.
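To see the effect concretely — two generators seeded with the same number produce identical streams, which is all a fixed random_state guarantees:

```python
import numpy as np

# Same seed -> the same "random" stream, run after run.
a = np.random.RandomState(42).rand(3)
b = np.random.RandomState(42).rand(3)
print(a)
print(np.array_equal(a, b))   # True: identical streams

# A different seed gives a different (but equally valid) stream.
c = np.random.RandomState(7).rand(3)
print(np.array_equal(a, c))   # False
```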
generate_text_seq(model, tokenizer, seq_length, seed_text, 100)
NameError Traceback (most recent call last)
&lt;ipython-input&gt; in &lt;module&gt;()
----> 1 generate_text_seq(model, tokenizer, seq_length, seed_text, 100)
NameError: name 'seed_text' is not defined
at IN [78], please assign seed_text = lines[12343]
@@KGPTalkie I've added "seed_text = lines[12343]", but this is the output for "generate_text_seq(model, tokenizer, seq_length, seed_text, 10)":
WARNING:tensorflow:From :8: Sequential.predict_classes (from tensorflow.python.keras.engine.sequential) is deprecated and will be removed after 2021-01-01.
Instructions for updating:
Please use instead:
* `np.argmax(model.predict(x), axis=-1)`, if your model does multi-class classification (e.g. if it uses a `softmax` last-layer activation).
* `(model.predict(x) > 0.5).astype("int32")`, if your model does binary classification (e.g. if it uses a `sigmoid` last-layer activation).
@@firzzahrin2940 hey bruh did you find any solution?
@@adityasutar790 The error literally tells you what to do. Instead of y_pred = model.predict_classes(encoded), put: y_pred = np.argmax(model.predict(encoded), axis=-1)
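For reference, predict_classes was removed in newer TensorFlow, and for a softmax model the argmax of predict is the drop-in replacement, exactly as the warning says. A tiny runnable demonstration (the probability rows stand in for model.predict(encoded)):

```python
import numpy as np

# model.predict returns one probability row per sample; the class id is
# the position of the largest probability -- which is exactly what the
# removed predict_classes computed for a softmax model.
probs = np.array([[0.1, 0.7, 0.2],
                  [0.6, 0.3, 0.1]])
y_predict = np.argmax(probs, axis=-1)
print(y_predict)   # [1 0]
```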
Hey, I have the same TensorFlow warning. Have you resolved it?
It takes me 6 minutes to train 1 epoch, so it will take 10 hours to train 100 epochs. Can anyone help?
Which gpu are you using?
stackoverflow.com/questions/38559755/how-to-get-current-available-gpus-in-tensorflow
@@KGPTalkie Oh, I'd missed this. It is OK now after I changed the runtime type. Thanks!
@@KGPTalkie May I also ask: can I read and predict on data from a CSV file instead of the passage from the website link?
Sir, I need help with deep recurrent neural network code.
Zubair, You can comment below your query. We will try to answer it.
@@KGPTalkie Sir, I have the paper and code, but I am facing a problem in the code. Please help: bugtriage.mybluemix.net/. And please make the next video lecture on this topic.