Masha Allah, I have searched for three years for a simple explanation of word2vec, until I found your video today. You have done a great job. Thanks a lot.
Your comment made my day Abdullahi. thanks for such motivation. keep watching.
Finally a simple, clear, and complete explanation of word embeddings. Thank you so much. I've studied and searched for months and you've just made the whole thing clear and straightforward.
I've been searching for a comprehensive explanation without the technical jargon. You are the best teacher.
Thanks Solomon.
You have a gift.
A natural teacher.
Simplified and concise. Thank you, sir.
I really don't comment but this is just great. Really helped me out with my college assignment. I was tearing my hair out before this. Genuinely thank you.
Glad it helped!
Your explanation is simple, clear, and the best.
Thanks Pankaj.
Your video is very simple to understand, thank you man!
Welcome
Finally an understandable explanation of how those methods work. Short and to the point, thanks for this video.
As soon as I understood the difference between a word and a vector, I 👍 ❤️ liked it, seriously. After that I watched the full video. Awesome explanation.
I spent a lot of time trying to understand the logic behind word2vec, but now I am satisfied.
Thanks for watching.
Thank you! I got the basic concepts about word2vec.
I wrote the same on one of your videos I watched earlier, but I still want to say it: you are really unfolding Data Science.
Thanks Uwais. Your words mean a lot.
Literally cannot make it simpler than this.
Thanks a lot Rahman.
crazzzzyyy. You make things so simple!!!!
May Allah increase your knowledge, my friend. I am just starting with NLP and you helped me learn the basics well. I didn't know what one-hot encoding means. Thanks and keep it up.
Also, could you please do an end-to-end NLP project, for example sentiment analysis?
I want to see these techniques in real action.
Thanks again :)
I find your videos very helpful. For many topics that are not explained well on other channels, I come to your channel to see if a video is available, and you have never disappointed me. Keep up the good work. Thanks.
Thanks Sachin. Please share with friends also
Perfectly explained, hats off to you!! Thank you.
You're most welcome
simply explained, Thank you
Thank you, Aman, I really appreciate your simple explanation.
Most Welcome. Your comments are precious for me.
As always, superbly simple and great explanation. Many thanks.
Welcome Rohan.
Just sending love and regards. You seem like a good person, and I appreciate your efforts to teach others.
Word2vec very simply explained. Thanks and all the best.
Glad you liked it
Nice explanation
Keep watching Pranjal.
Very intuitive, thx man
welcome Jialai.
Thank you so much, sir. I was confused by the documentation. Thanks a lot for such a clear and easy explanation.
Welcome Piyali.
finished watching
One of the finest explanations, thanks bro.
Thank u.
@Unfold Data Science The encoding you do at the start does not look like one-hot encoding to me; it's a BOW (bag-of-words) representation. Correct me if I'm wrong.
Will check.
thank you sir , nice explanation
Great, Aman. Your teaching technique is really tremendous.
Good job sir, thank you
Very nicely explained in a very simple way.
Thanks Karthick.
Thank you, brother.
Welcome Suhanshu.
Aman bhai, thanks for excellent explanation.
Always welcome Gurminder Bhai.
Amazing! so well explained in simple terms.
Thanks Uday
Great video, straight to the point and easy to understand.
Cheers.
You mean the sparse vector of a word will give a dense vector? How are the features selected and how are the values assigned?
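Roughly, yes. A toy numpy sketch of the idea (the words, sizes, and numbers below are invented, not from the video): a sparse one-hot vector multiplied by the embedding matrix simply selects that word's dense row, and the values in that matrix start out random and are adjusted during training rather than being chosen by hand.

```python
import numpy as np

# The embedding matrix W starts as random numbers; training
# (CBOW / skip-gram) adjusts them, nobody assigns them manually.
rng = np.random.default_rng(0)
vocab = ["apple", "mango", "king", "queen"]
dim = 3
W = rng.normal(size=(len(vocab), dim))

# Sparse one-hot vector for "apple"
one_hot = np.zeros(len(vocab))
one_hot[vocab.index("apple")] = 1.0

# Multiplying by a one-hot vector just picks out one row of W:
dense = one_hot @ W
print(np.allclose(dense, W[vocab.index("apple")]))  # -> True
```

In practice libraries skip the multiplication and do a direct row lookup, but the effect is the same.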
thank you for this clear explanation!
Thank you
Precise explanation. Very useful. Thank you, Aman. Just one suggestion: visually it would be more soothing if you could write in straight lines on your whiteboard.
Thanks a lot for suggestion.
Thanks Aman
well explained and in a simplified way. Thanks a lot for this video
Glad it was helpful Abhishek.
Good and simple explanation.
Thanks for liking Sharat.
Very good introduction to Word2Vec. I would have preferred a bigger writing space, but nonetheless it was very clear.
Glad it was helpful Kevin.
when Legends start to teach everything becomes ezzzz!!!!!
Thanks Anurodh.
awesome video sir, really awesome.
So nice of you Aniket. Your comments are precious for me.
Good work.
Thank you Omkar.
Very informative, please continue.
Thank you Sunil, I will
great explanation
Thanks Sandipan.
humble and nice presentation :)
Thanks a lot.
Very good explanation!
Thanks Raghava.
Great explanation. thanks a lot.
You are welcome! Tabassum.
Brother, great explanation. I would appreciate it if you could tell me where I can find the next video.
Please check my NLP playlist.
Best man
Thanks again.
great work 👍👍
Man, you are great. Thanks a lot. Do you have any doc2vec video or playlist?
Thanks a lot. I have not created a doc2vec video yet, but I will.
Very helpful explanation. Looking forward to seeing more. Thanks in advance.
Glad it was helpful!
thank you
You're welcome Sudip.
Hi,
I have 8 years of experience in IT (programming) and have started preparing for a career transition to data science.
My query is how I should present this on my resume.
Shall I tell them that I am new to this, know all the concepts, and have worked on Kaggle datasets?
Will they reject me since I have no past experience in data science?
Looking forward to your valuable comments.
Hi,
Answering this query needs a bit of understanding of your profile before giving suggestions.
I see you are looking for some career guidance.
If yes, you can reach me directly for one-on-one mentorship.
You can go to the "About" section of the channel and click the link "One to One Mentorship Link".
Great stuff! Couldn't find a better video with such well assembled content.
Glad it was helpful!
thank you!
You're welcome!
You explained nicely. Please use a bigger screen.
Thanks for watching, and for the feedback, Binay.
Great, sir. Please also share about POS tagging using this approach.
Sure, will do Shaheen.
Informative video... (y) And of these, which algorithm is best for text classification tasks, CBOW or skip-gram?
It depends on what kind of data you are working with.
Thanks :) Great explanations
Welcome.
Sir, I have one query: as input, do we need to feed the one-hot representations of the surrounding words including the one-hot vector of the target word, or excluding it?
Hi Piyali. Excluding. For the target word you might need to do a separate encoding.
@@UnfoldDataScience Okay sir. Then, to get all the word embeddings of an input sentence, the one-hot encodings of the words under the window are passed through CBOW to predict the middle target word using the feature-based information, then we slide the window and repeat until the end. Is that the process, sir?
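That sliding-window step can be sketched in a few lines of plain Python (the function name and example sentence below are made up for illustration): at each position, the surrounding words within the window form the context and the middle word is the target, which is excluded from its own context.

```python
def cbow_pairs(tokens, window=2):
    """Slide a window over the sentence; at each position the
    surrounding words are the context and the middle word is the
    target (the target itself is excluded from the context)."""
    pairs = []
    for i, target in enumerate(tokens):
        context = tokens[max(0, i - window): i] + tokens[i + 1: i + 1 + window]
        pairs.append((context, target))
    return pairs

sentence = "i like natural language processing".split()
for ctx, tgt in cbow_pairs(sentence, window=2):
    print(ctx, "->", tgt)
```

During training, each context list is one-hot encoded and fed to the network, which is trained to predict the target word; repeating this over the whole corpus is what produces the embeddings.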
Awesome man, I loved the explanation.
Glad you liked it
Great explanation and well done. My question is: how are the numbers selected for the features? I mean, in your example at time (5:00) of your video, you assigned 0.9 to apple and 0.85 to mango, etc. How are these numbers assigned?
We do not assign them manually; the framework learns them during training.
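To make that concrete, here is a toy numpy sketch (all words, sizes, and the learning rate are invented): the vectors start as random numbers, and each gradient update nudges them so the context words better predict the target, which is where values like 0.9 or 0.85 eventually come from.

```python
import numpy as np

rng = np.random.default_rng(1)
vocab = {"apple": 0, "mango": 1, "fruit": 2}
V, dim = len(vocab), 4
W_in = rng.normal(scale=0.1, size=(V, dim))    # input embeddings (start random)
W_out = rng.normal(scale=0.1, size=(dim, V))   # output weights
lr = 0.5

def cbow_step(context_ids, target_id):
    h = W_in[context_ids].mean(axis=0)          # average the context vectors
    scores = h @ W_out
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                        # softmax over the vocabulary
    grad = probs.copy()
    grad[target_id] -= 1.0                      # gradient of cross-entropy wrt scores
    dh = W_out @ grad                           # gradient flowing back to the hidden layer
    W_out[...] -= lr * np.outer(h, grad)        # update output weights
    W_in[context_ids] -= lr * dh / len(context_ids)  # update context embeddings

before = W_in.copy()
for _ in range(50):
    cbow_step([vocab["apple"], vocab["mango"]], vocab["fruit"])
print(not np.allclose(W_in, before))  # -> True: the values moved, i.e. they are learned
```

Libraries like gensim or TensorFlow do the same thing at scale; you never set the embedding values yourself.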
How do I write a program to implement the continuous bag of words model using the KNN algorithm (in Python)?
Hi Nagitha, you can convert the words to numbers and run KNN.
@@UnfoldDataScience Could you please explain in detail? I'm new to this course and I don't know how to do it. If possible, please provide the code with an explanation.
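Not the course code, but a hedged sketch of the idea in the reply above (the word vectors, sentences, and labels below are made up; in practice the vectors would come from a trained word2vec/CBOW model): average each sentence's word vectors into one number vector, then classify a new sentence by its nearest labelled neighbours.

```python
import numpy as np

# Hypothetical 2-D "word vectors"; a real model would give 100-300 dims.
word_vecs = {
    "good": np.array([0.9, 0.1]), "great": np.array([0.8, 0.2]),
    "bad":  np.array([0.1, 0.9]), "awful": np.array([0.2, 0.8]),
}

def sentence_vec(sentence):
    """Average the vectors of the known words in a sentence."""
    vecs = [word_vecs[w] for w in sentence.split() if w in word_vecs]
    return np.mean(vecs, axis=0)

# Tiny labelled training set
train = [("good great", "pos"), ("bad awful", "neg")]

def knn_predict(sentence, k=1):
    q = sentence_vec(sentence)
    # Sort training sentences by distance to the query vector
    nearest = sorted(train, key=lambda item: np.linalg.norm(sentence_vec(item[0]) - q))
    labels = [label for _, label in nearest[:k]]
    return max(set(labels), key=labels.count)   # majority vote

print(knn_predict("great good"))  # -> pos
```

With real data you would use scikit-learn's `KNeighborsClassifier` on the averaged vectors instead of this hand-rolled loop.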
I have one question: since bag of words has issues, isn't it used anywhere?
It's mostly not used, as it's a very simple technique.
How will Word2Vec handle the test data set?
We need to create word vectors for the test and train sets in the same way.
How do I increase the dimensionality of the feature vector?
By increasing the output size beyond 300 units you can get vectors with higher dimensions and improve the model quality.
Thanks Vijeth.
Great stuff. Where is this mostly used?
Thanks Prashanth. In many NLP applications where we need to capture the semantic meaning of a sentence.
Sir, which subject did you do your graduation in?
Hi Rawat, I did a B.Tech.
@@UnfoldDataScience B.Tech in Mechanical, or something else?
@@UnfoldDataScience ??
Please do not put ads in between; it creates distraction. Otherwise the lecture is good.
Great stuff!
Thanks Aditya.
Good one again!
Thanks again! Happy Learning!
Great job bro!
Thanks for the visit Syed.
Bro, how can we decide the vector space size, e.g. (5000 words × 300 dimensions)? Here 5000 is our vocab size, but how can we decide on the length of 300?
Normally we take 300 dimensions; however, if our vocab size is huge, processing might be difficult with a larger number of dimensions, hence we try limiting it to 200 or so.
If you do not have infrastructure issues, you can go for 500 or even more.
@@UnfoldDataScience Thanks....bro
@@UnfoldDataScience So it is considered a hyperparameter? Can we change this during tuning? And please make a video on Optuna and Bayesian hyperparameter tuning.
Are you using CBOW or skip-gram?
I think CBOW, though I'm not sure; the default one.
is nothing BUTT!
Cheers 🍻