This is super awesome. I have worked as a search engineer who already understood TF/IDF and had been trying to understand BM25 for a long time -- it just flew over my head. Yours is the first explanation that just clicked for me. Thank you!!
This is way, way more informative than my lecturer's lectures.
Not only does this video explain the specific metric, it also teaches how to analyze metrics! Amazing!
Glad it was helpful!
What a superb explanation! I really appreciate how you broke down the problem and its solution into manageable pieces. Your clarity and approach are consistently impressive! 🍻
Simply an awesome explanation! Congrats
Glad you liked it!
Remarkably well explained, and with such concise elegance too. Extra +100 pts for explaining in layman's terms what a partial derivative is.
Analyzing an equation using derivatives is brilliant. Thanks for yet another outstanding video, as always.
🎯 Key points for quick navigation:
00:00:01 *📚 Introduction to TF-IDF and BM25*
- TF-IDF and BM25 are essential for ranking search results by textual relevance.
- TF-IDF combines term frequency and inverse document frequency to rank documents by relevance.
- Rarer query terms carry more weight in relevance scoring.
00:02:10 *📝 Limitations of TF-IDF*
- TF-IDF does not account for document length, which can misrank documents.
- Relevance can be skewed by raw term frequency without rewarding brevity.
- Real-world example contrasting document length's effect on relevance.
00:05:00 *🚀 Need for a New Metric: BM25*
- BM25 is introduced to address TF-IDF's shortcomings.
- BM25 applies diminishing returns to term frequency, combating keyword stuffing.
- Penalizing longer documents prevents manipulation.
00:08:03 *🔍 Understanding BM25's Structure*
- Explanation of the BM25 formula and its components.
- Iterative refinement over earlier versions to improve matching.
- b and k1 are tunable parameters that shape document ranking.
00:12:38 *📈 BM25 Partial Derivatives and Their Impact*
- Partial derivatives show how the score changes as each property varies.
- Positive but diminishing returns for increasing term frequency.
- Increasing document length relative to the corpus average decreases the score.
00:16:49 *🎯 Demonstrating BM25's Superiority Over TF-IDF*
- Practical example contrasting BM25 with TF-IDF in document scoring.
- BM25 ranks concise, relevant documents above longer, less relevant ones.
- Add complexity only when it yields measurable performance improvements.
Made with HARPA AI
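As a companion to the summary above: a minimal sketch of the full BM25 score in Python. The function name, the smoothed IDF variant, and the defaults k1 = 1.5 and b = 0.75 are my own illustrative choices, not taken from the video.

```python
import math

def bm25_score(query_terms, doc, corpus, k1=1.5, b=0.75):
    """Score one tokenized document against a query with BM25.

    `doc` and every entry of `corpus` are lists of tokens.
    k1 controls term-frequency saturation; b controls how
    strongly document length is penalized.
    """
    n_docs = len(corpus)
    avgdl = sum(len(d) for d in corpus) / n_docs
    score = 0.0
    for term in query_terms:
        # Document frequency: how many documents contain the term.
        df = sum(1 for d in corpus if term in d)
        # Smoothed IDF: rare terms get a larger weight.
        idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1)
        tf = doc.count(term)
        # Saturating term frequency with length normalization.
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
    return score

docs = ["cat cat cat dog dog dog".split(), "cat cat".split(), "dog".split()]
print(bm25_score(["cat"], docs[0], docs))  # ~0.63: long, diluted document
print(bm25_score(["cat"], docs[1], docs))  # ~0.75: short, on-topic document wins
```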
This was wonderfully explained, thank you for the great video!
Great explanation, thanks!
Awesome video, very clear and helpful!
Thanks for your video! Doesn't TF already take into account the length of the document? It's a proportion of the number of times a word appears out of the total number of words in the document. So, issue #1 wouldn't be a problem?
Yeah, I was thinking that too. Long documents wouldn't really matter either, because the ratios are equivalent (e.g. 1/10 = 100/1000), so I don't really get it either.
Rightly pointed out! But yes, the other parts of the problem still make sense.
Frequency is used as a raw count here; what you're describing is frequency normalised by document length. While that is fine, it scales linearly, whereas BM25's term-frequency component saturates: each additional occurrence adds less and less to the score instead of a constant increase, which produces better results in real use cases.
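To make the diminishing-returns point concrete, here's a quick sketch comparing the two behaviours (k1 = 1.2 is just an assumed value, and the length factor is left out to isolate the saturation effect):

```python
k1 = 1.2       # assumed value; 1.2-2.0 is a common range
doc_len = 100

for tf in (1, 2, 5, 10, 50):
    linear = tf / doc_len                    # normalized TF: grows without bound
    saturating = tf * (k1 + 1) / (tf + k1)   # BM25 TF component, length factor omitted
    print(f"tf={tf:2d}  linear={linear:.2f}  bm25_tf={saturating:.2f}")
# The BM25 term approaches k1 + 1 = 2.2 no matter how often the
# word is repeated, which is what blunts keyword stuffing.
```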
TF considers only the current document with respect to q. For example, given:
1. Doc1: "cat cat cat dog dog dog dog dog dog"
2. Doc2: "cat cat"
3. Doc3: "dog"
For the query "cat", TF is higher for Doc1 because it contains "cat" 3 times versus 2 times in Doc2. Similarly, Doc1 scores higher than Doc3 for the query "dog". Effectively, the longer the document, the higher the score, with no penalty. In the extreme case, a document 10x the length of the average document will rank higher. That's the problem with TF/IDF: it does not take document length into account, which BM25 does with the term 'theta'.
Note that IDF only considers the frequency of the query term "cat" across the whole corpus, not the document length.
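A quick numeric check of this example (a sketch: the parameter values k1 = 1.5 and b = 0.75 are assumed defaults, not taken from the video):

```python
corpus = [
    "cat cat cat dog dog dog dog dog dog".split(),  # Doc1: long, diluted
    "cat cat".split(),                              # Doc2: short, on-topic
    "dog".split(),                                  # Doc3
]
avgdl = sum(len(d) for d in corpus) / len(corpus)   # average length = 4
k1, b = 1.5, 0.75

def bm25_tf(term, doc):
    # BM25's saturating, length-penalized term-frequency component.
    tf = doc.count(term)
    return tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))

for i, doc in enumerate(corpus, 1):
    print(f"Doc{i}: raw tf={doc.count('cat')}, bm25 tf={bm25_tf('cat', doc):.2f}")
# Raw TF ranks Doc1 over Doc2 (3 vs 2); BM25's component reverses
# this (~1.27 vs ~1.70) because Doc1 is much longer than average.
```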
What a great explanation!
randomly landed on your channel, your explanations are fundamentally so awesome 🙏🏽
This particular IDF has nothing to do with occupation, apartheid, or genocide.
Isn't the term frequency already dependent on the total number of words in the document?
So why not just apply length normalization to the document so that tf of cat in A = 1/10 and tf of cat in B = 10/1000?
I suppose we are doing that here as well, but instead of treating each document as independent and normalising it in isolation, we also take the relative difference in size into account: each document's length is compared to the corpus average. It's like a weighted normalisation.
Plain normalisation would score every document without reference to the rest of the collection; here the statistics of the whole corpus feed into each document's score.
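Concretely, the "weighted normalisation" is the length term in BM25's denominator, which compares each document to the corpus average length (avgdl). A minimal sketch of just that factor, assuming the common default b = 0.75:

```python
def length_factor(doc_len, avgdl, b=0.75):
    # Equals 1.0 for an average-length document, above 1 for longer
    # ones (more penalty), below 1 for shorter ones (a boost).
    return 1 - b + b * doc_len / avgdl

avgdl = 100
for n in (25, 100, 400):
    print(n, length_factor(n, avgdl))  # 0.4375, 1.0, 3.25
# This factor multiplies k1 in the denominator of the TF term,
# so larger values shrink the document's score.
```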
Why would you use additional shorthand when trying to teach something? I get that space on the paper is limited, but adding one more thing for the learner to keep track of is a bad teaching strategy.