This particular IDF has nothing to do with occupation, apartheid, or genocide.
Thanks for your video! Doesn't TF already take into account the length of the document? It's a proportion of the number of times a word appears out of the total number of words in the document. So, issue #1 wouldn't be a problem?
yeah I was thinking that too, and long documents shouldn't matter either because the ratios are equivalent, like 1/10 = 100/1000, so I don't really get it either
Rightly pointed out! But yes, the other parts of the problem make sense.
Frequency is used as is, and what you're talking about is frequency normalised by the length of the document. While that is fine, it scales linearly, whereas BM25's term-frequency component saturates: each additional occurrence adds less to the score than the previous one, instead of a constant linear increase, which produces better results in real use cases
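To make that concrete, here is a rough sketch (my own, not from the video) comparing plain length-normalised TF, which keeps growing linearly, with BM25's saturating term-frequency part. k1 and b are the standard BM25 parameters, and 1.5 / 0.75 are just common defaults:

```python
# Rough sketch, not from the video: plain normalised TF vs BM25's
# saturating term-frequency component, for a fixed document length.

def normalized_tf(count, doc_len):
    # Plain length normalisation: grows linearly with the raw count.
    return count / doc_len

def bm25_tf(count, doc_len, avg_doc_len, k1=1.5, b=0.75):
    # BM25 term-frequency part: each extra occurrence adds less than
    # the previous one, so the score flattens out (saturates).
    norm = k1 * (1 - b + b * doc_len / avg_doc_len)
    return count * (k1 + 1) / (count + norm)

doc_len, avg_doc_len = 1000, 500
for count in (1, 5, 10, 50, 100):
    print(count, normalized_tf(count, doc_len),
          round(bm25_tf(count, doc_len, avg_doc_len), 3))
```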
This is way way more informative than my lecturer's lecture
What a superb explanation! I really appreciate how you broke down the problem and its solution into manageable pieces. Your clarity and approach are consistently impressive! 🍻
This was wonderfully explained, thank you for the great video!
isn't the term frequency already dependent on the total number of words in the document?
Great explanation, thanks!
Remarkably well explained and with such concise elegance too. Extra +100 pts for explaining in layman's terms what a partial derivative is
Not only does this video explain the specific metric, but it also teaches how to analyze metrics! amazing!
Glad it was helpful!
what a great explanation
Analyzing an equation using derivatives is brilliant. Thanks for yet another outstanding video, as always.
Awesome video, very clear and helpful!
So why not just apply length normalization to the document so that tf of cat in A = 1/10 and tf of cat in B = 10/1000?
I suppose we are doing that here as well, but instead of treating each document as IID and hence normalising each one on its own, we are also taking into account the relative difference in size (how long a document is compared to the average). It's like a weighted normalisation.
Plain normalisation would be like comparing a bunch of sigmoids with cross entropy.
More specifically, we are trying to take the mutual information amongst the docs into account in the calculation.
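If it helps, here is a tiny sketch of that "weighted normalisation" idea (mine, not from the video), using the equal-ratio case from the earlier comment: 1 "cat" in a 10-word document vs 100 "cat"s in a 1000-word document. Plain normalisation scores both identically, while BM25 compares each document's length to the average, so the two come out different. k1 and b are the usual BM25 parameters with common default values:

```python
# Hypothetical example, not from the video: plain length normalisation
# vs BM25's length-aware term frequency for two equal-ratio documents.

def bm25_tf(count, doc_len, avg_doc_len, k1=1.5, b=0.75):
    # The normalisation term is relative to the average document length.
    norm = k1 * (1 - b + b * doc_len / avg_doc_len)
    return count * (k1 + 1) / (count + norm)

docs = {"A": (1, 10), "B": (100, 1000)}  # (count of "cat", document length)
avg_len = sum(length for _, length in docs.values()) / len(docs)

for name, (count, length) in docs.items():
    plain = count / length  # 0.1 for both A and B
    print(name, plain, round(bm25_tf(count, length, avg_len), 3))
```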
randomly landed on your channel, your explanations are fundamentally so awesome 🙏🏽
Why would you use additional shorthand when trying to teach something? I get that the paper is small, but adding one more thing for the learner to keep track of is a bad teaching strategy.