He doesn't just tell you to use bloom filters or what they are useful for; he actually explains them from scratch with such simplicity. He is the epitome of a great teacher.
But when to use it?
Probably the best explanation of Bloom Filters on YouTube.
How have I been programming so long and never used this. Incredibly elegant!
Best explanation I've seen online 👍
This is a great explanation, and I love how it's complete with examples/applications. Thanks!
Very interesting data structure. The biggest challenge here is to write good hash functions, which is not easy. Also, in order to reduce collisions, rather than increasing the size of the bloom filter, I would prefer to use multiple bloom filters and assign some hash functions to one filter, some to another, and so on.
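For anyone who wants to see the idea in code, here is a minimal sketch of a plain bloom filter in Python. It fakes the k independent hash functions by hashing the item together with a seed; a real implementation would use proper hash functions (e.g. MurmurHash) and an actual bit array instead of a list.

```python
import hashlib

class BloomFilter:
    """Minimal bloom filter sketch: a bit array plus k seeded hash functions."""

    def __init__(self, size: int, num_hashes: int):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [0] * size  # plain list standing in for a real bit array

    def _indexes(self, item: str):
        # Derive k indexes by hashing the item together with a seed.
        for seed in range(self.num_hashes):
            digest = hashlib.sha256(f"{seed}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item: str) -> None:
        for i in self._indexes(item):
            self.bits[i] = 1

    def might_contain(self, item: str) -> bool:
        # "False" is definite; "True" only means "probably present".
        return all(self.bits[i] for i in self._indexes(item))

bf = BloomFilter(size=1000, num_hashes=3)
bf.add("CAT")
print(bf.might_contain("CAT"))  # True
print(bf.might_contain("CAR"))  # almost certainly False
```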
BRO I LITERALLY FORGOT THIS TOPIC. Thank you, YouTube.
outstanding work! I know Bloom Filter now.
You and Gaurav Sen are going to make me a lot of money one day. Here’s a 👍
@Tech Dummies Thank you for all your hard work. I have learned a lot from you.
Exceptional explanation!
Very clear and well done because you explained how it works, but most importantly "why / when should someone use the bloom filter?".
I think the answer to the question "Why / when is this useful?" is missing from a lot of videos.
Awesome video. The concept is crystal clear now
You rock, man! I'm addicted to your videos.
Great video & Great Explanation.
I was asked this question in an interview when I had absolutely no idea about bloom filters. I doubt anyone could come up with this idea of storing usernames in a one-hour interview.
This is one of the best data structures I've ever seen.
Hey Narendra. great stuff!... crisp and clear explanation...
Awesome explanation - easy enough for kids to learn - thank you :).
Thanks for the help. You just made my exam easier.
Awesome Video & fantastic basic level of understanding on bloom filter. Thank You so much.
Best Video on BloomFilter.
Best explanation of bloom filter
Thank you. Now I have a much clearer understanding of the BF! I'd like to learn more about the hash algorithms.
It was very helpful. Well explained.
Great explanation. I have 2 doubts:
1. What if, after some time, all the values in the bit array are set to 1? Then for all searches the answer will always be yes.
2. If we have to remove an entry or word, how do we reset values in the bloom filter, since the same hash value can belong to some other word?
1. Yes. Hence the need to use a bigger bucket of values, and maybe as few hash functions as possible, so as to avoid this scenario.
(Doing this could also help prevent collisions and, as a result, reduce the number of false positives.)
2. That's a built-in constraint of bloom filters: you CANNOT delete values once they are entered.
For this and other probabilistic data structures and algorithms, with explanations of these "edge" cases, take a look at my recently published book "Probabilistic Data Structures and Algorithms for Big Data Applications".
@@sunilr360 The limitation in point 2 can be removed with counting bloom filters. Instead of a bit array, you keep a byte/int array (en.wikipedia.org/wiki/Counting_Bloom_filter). While inserting a word, you increment the values at the array positions returned by the k hash functions, which takes more space. While deleting a word, you decrement by 1 every position that was incremented when the word was entered.
2 can be addressed by counting bloom filters
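A rough sketch of the counting variant described above, reusing the same seeded-hash trick as a plain filter; counter width and overflow handling are ignored for brevity.

```python
import hashlib

class CountingBloomFilter:
    """Counting bloom filter sketch: integer counters instead of single bits,
    so deletion becomes possible at the cost of extra space."""

    def __init__(self, size: int, num_hashes: int):
        self.size = size
        self.num_hashes = num_hashes
        self.counters = [0] * size

    def _indexes(self, item: str):
        for seed in range(self.num_hashes):
            digest = hashlib.sha256(f"{seed}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item: str) -> None:
        for i in self._indexes(item):
            self.counters[i] += 1

    def remove(self, item: str) -> None:
        # Only safe if the item was actually added before.
        for i in self._indexes(item):
            if self.counters[i] > 0:
                self.counters[i] -= 1

    def might_contain(self, item: str) -> bool:
        return all(self.counters[i] > 0 for i in self._indexes(item))
```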
Thank you. Good content. Concise and to the point.
Your explanation style is nice.
Very interesting concept, explained in a very good way. Thank you so much.
Loved the Linkin Park t-shirt! :D
Excellent explanation, sir❤
Well explained !! Thanks for the video !!!
Can we use trie data structure?
Very nicely explained!! Thanks.
Great video but I have one question: Would it be better if you had one bit vector for each hash function? I don't understand why the values would be co-mingled from the different hash functions when you could have them in separate vectors.
Thank you! Very good tutorial! Please keep giving us more videos!!!
Amazing work! Thank you 🌻
That was a great and amazing explanation; congratulations on this video. It was a really great job.
I think we can also use "Trie" Data Structure for the problem talked about in the first few minutes. Keep on inserting the elements in the trie and then if we want to search for 'CAR', we can directly check it in O(lengthOfWord) time. Isn't it?
I'm not comparing Trie and BF but just suggesting another Data Structure which can be used.
Thanks!
Yes, but which one is better: O(L) or O(1)?
@@TechDummiesNarendraL O(1) :D
@@TechDummiesNarendraL depends on memory constraints
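For comparison, a bare-bones trie sketch showing the O(length-of-word) lookup mentioned in this thread; unlike a bloom filter, it has to keep every stored word in memory, which is exactly the space cost the filter avoids.

```python
class TrieNode:
    def __init__(self):
        self.children = {}   # char -> TrieNode
        self.is_word = False

class Trie:
    """Prefix tree: search cost is proportional to the word length."""

    def __init__(self):
        self.root = TrieNode()

    def insert(self, word: str) -> None:
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def contains(self, word: str) -> bool:
        node = self.root
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return False
        return node.is_word

t = Trie()
t.insert("CAT")
print(t.contains("CAT"))  # True
print(t.contains("CAR"))  # False, decided in O(len(word)) steps
```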
Mind Blowing Video !!! Thanks
Thanks Naren for the video. Your video did not explain how, for "Hen", it was understood that there is a collision and a probabilistic answer was given, since the algorithm is just checking the bits set in the bit array. Am I missing something here?
Really nice explanation, thanks. Also awesome real-life examples.
Thanks for such an insightful explanation. I am really inspired by your video. Could you please share how you find these topics or this syllabus, and where you get such detailed and precise information? Thanks.
Suggestion: Can you do some location based algorithm questions ? like s2 library algos.
Really appreciate it, Narendra. But why is a lookup table not considered efficient for the example provided? Based on the cardinality of the column (values), we can use either a bitmap or a binary index.
@xyz The main problem with the lookup/hash table is memory, since its classical implementation requires storing the real values indexed by, for instance, hash values. But the elements could be quite big, e.g. some object in the database, or hard to produce, e.g. involving a disk scan. With a bloom filter we don't need to store the values at all, just to check whether they exist or not. Consider using such a bloom filter to optimize a database check for objects: before asking the DB whether an object physically exists and performing the query, we can first check it in the Bloom Filter, and only if the BF says it "may exist" do we perform the actual query.
The check itself costs almost nothing as long as the hash function can generate the value in constant time.
Take a look at my recently published book "Probabilistic Data Structures and Algorithms for Big Data Applications" (pdsa.gakhov.com) for other data structures, with this use case explained.
@@gakhov Appreciate the clarification, Andrii. Yes, it's better to know whether a value exists or not before even querying the DB. Go to the DB only if you know the value may exist in the DB.
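In rough Python, the pattern described above might look like this; `bloom` and `query_db` are hypothetical stand-ins for a bloom-filter object and the expensive database call.

```python
def object_exists(key, bloom, query_db):
    """Cheap membership pre-check before the expensive database lookup."""
    if not bloom.might_contain(key):
        return False          # definite "no": skip the DB round trip entirely
    return query_db(key)      # "maybe": pay for the real query to be sure

# Tiny demo with stand-ins: a set-backed "filter" and a counting fake DB call.
class FakeBloom:
    def __init__(self, items): self._items = set(items)
    def might_contain(self, key): return key in self._items

bloom = FakeBloom({"user:42"})
db_calls = []
def query_db(key):
    db_calls.append(key)              # record how often the "expensive" call runs
    return key == "user:42"

print(object_exists("user:7", bloom, query_db))   # False, DB never touched
print(object_exists("user:42", bloom, query_db))  # True, one DB call
print(db_calls)                                   # ['user:42']
```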
A board, a marker and a great mind. Good job.
Thank you sir, very well and clearly explained.
Awesome. Mind Blown!
I would like to see how you compute the probability of error, the storage requirements, and the number of hash functions K to use. Maybe in a separate video?
Very nice explanation..
Superb video, sir. Quick question: is there a solution to handle the size of the bit array dynamically? If our data increases, do we need to rehash the data, or would we use consistent hashing to avoid this during a rehash?
awesome video in simple language
Great explanation! Waiting for Count-min sketch and comparison with BF. Thanks
Thanks, and sure, I will do the Count-min sketch algo.
How will bloom filter work in distributed environment? Can we store the bit array in multiple nodes?
1. Shard the bit space across multiple nodes and do finds/queries with potentially O(K) network lookups when checking for existence.
2. Employ consistent hashing over the original entry to make sure only certain elements go to certain bloom filters hosted on each node (with only 1 network lookup when checking for existence). Although this isn't really a "distributed bloom filter" as much as it's employing a bloom filter on a given node that is guaranteed to only get a certain amount of the key space.
That being said, you probably don't have a realistic need for doing something like this. Most of the advantages of using bloom filters are fast lookup and local storage without IO hops (no touching network or disk ideally).
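A toy illustration of option 2: each "node" here is just a local bloom-filter-like object, and routing is a simple stable hash modulo the shard count (a real deployment would use consistent hashing and network calls).

```python
import hashlib

class ShardedBloomFilter:
    """Route each key to exactly one shard via a stable hash of the key,
    so any lookup touches a single bloom filter."""

    def __init__(self, shards):
        self.shards = shards  # bloom-filter-like objects, one per "node"

    def _shard_for(self, key: str):
        # Stable hash (unlike Python's built-in hash(), which varies per process).
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return self.shards[h % len(self.shards)]

    def add(self, key: str) -> None:
        self._shard_for(key).add(key)

    def might_contain(self, key: str) -> bool:
        return self._shard_for(key).might_contain(key)

# Demo with the simplest possible stand-in "filters": plain Python sets.
class SetShard:
    def __init__(self): self._items = set()
    def add(self, key): self._items.add(key)
    def might_contain(self, key): return key in self._items

cluster = ShardedBloomFilter([SetShard() for _ in range(4)])
cluster.add("example.com/path")
print(cluster.might_contain("example.com/path"))  # True
```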
Great explanation. Thank you
Great one, thanks! Maybe you can compare use cases for count-min sketch vs bloom filters, both being probabilistic data structures.
Take a look at my recently published book "Probabilistic Data Structures and Algorithms for Big Data Applications" (pdsa.gakhov.com) for the comparison. Simplifying, they solve different problems: the BloomFilter is designed to answer the question "Does the element exist or not?" (the membership problem), while the Count-Min Sketch answers the question "How many times has this element been stored?" (the frequency problem).
Great information. Thanks a lot
Can the Trie data structure be used to check if a username is present? It won't take up a lot of space, and lookup will be O(1), right? Of course, it requires all the usernames to be stored in the Trie before lookup. What are the drawbacks of this? Please reply with your thoughts on this.
Time complexity with BF would be K times O(1) instead of O(K)?
Anyhow, it will be constant, as you mentioned.
Test Sir.. Very nice explanation.. Please make more videos on system design and on other things as well.
Great video. Thanks!
great stuff, thanks for the video.
Cool explanation... Next time please check the mic quality, as the voice is somewhat hard to hear in a few places... but thanks a lot.
Awesome!! It is very helpful.
So in the case where a Bloom Filter is used for malicious URL detection, what happens if the filter says that the URL is malicious? Does the browser then send a request to Google with that particular URL for confirmation, since the Bloom filter's "Yes" would just be a probability-dependent answer? Or does it straightforwardly say that the URL is malicious?
If you check the ratio of good vs malicious URLs browsed by a user, it will be around 200:1, so it's OK for the browser to ask the server for confirmation.
Also, if you use more hash functions in the Bloom filter, the probability of error decreases to less than 0.1%.
In that case you can make a parallel call to the server if you don't want to add latency when the user visits potentially bad links.
@@TechDummiesNarendraL A parallel call? Well, that's interesting. I can think of two scenarios where we can utilise parallel calls. Please let me know which one you were suggesting.
1) While we are calculating the hash function values for the URL, we make a parallel call to the server; whichever result comes faster will be used.
a) If the server response comes first, we use it.
b) If Bloom Filter gives a result first and the result is a "NO", we use it.
c) If the Bloom Filter gives a result first and the result is a "PROBABLE YES", we wait for server confirmation. The wait time here will be small, if not zero, as the call was already made before starting the hash calculations.
2) In case the Bloom Filter detects it as malicious, we probably let the user visit the site, and meanwhile send a request to the server for confirmation. And now, if the server confirms it as malicious, do we show a notification to the user? Because, as it was a parallel call, the user might have already visited the site before a response came from the server.
@@NitishSarin I would go for the second one, but maybe hold off the page load or rendering before we are sure about the URL.
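Sketching the flow discussed in this thread; `local_filter` and `ask_safe_browsing_server` are made-up names for the in-browser bloom filter and the remote confirmation call.

```python
def check_url(url, local_filter, ask_safe_browsing_server):
    """Local bloom filter first; only the "maybe malicious" case pays for a
    network round trip to the server for confirmation."""
    if not local_filter.might_contain(url):
        return "safe"                      # definite negative, no server call
    # Probable positive: could be a false positive, so confirm remotely.
    return "malicious" if ask_safe_browsing_server(url) else "safe"

# Tiny demo with stand-ins: a set as the "filter" and a fixed server answer.
class FakeFilter:
    def __init__(self, urls): self._urls = set(urls)
    def might_contain(self, url): return url in self._urls

flt = FakeFilter({"http://bad.example"})
server = lambda url: url == "http://bad.example"
print(check_url("http://ok.example", flt, server))   # safe (no server call)
print(check_url("http://bad.example", flt, server))  # malicious
```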
Great explanation!!
amazing explanation.
Thank you Narendra.
Thanks . This is a great topic.
Thank you very much for the detailed information. Can you please also do one for HyperLogLog?
Nicely explained. Thanks.
Nice explanation
Thank you! Great video, keep it up!
Excellent work, thank you!
can we use trie for the username search scenario ?
Can a bloom filter created by inbloom be read using pybloomfiltermmap or pybloomfiltermmap3? @Tech Dummies Narendra L
How did you get 2, 6, 4, 10? Please explain how I can use CAT to generate those figures from the hash function.
Great sharing, very clear
great explanation!
You are a hero!!!
You can't use "hash collision" as a con of using a hash table here, because the consequence of a "hash collision" in a bloom filter is also bad (i.e. it results in a smaller and smaller confidence level). So further citing a possible addition of disk IO here is just not comparable.
A question. Let's say that I have 100 elements. I will have to create a bit array of length 100 for implementing the bloom filter. Let's assume that I have 3 hash functions. Now, initially all the values in the bit array are set to zero, and as I get an element to verify whether it's in the database, I pass the new element through the 3 hash functions and thus get 3 distinct values. In the best-case scenario, I will have all my indexes filled with 1 after checking 33-34 elements. After that, whichever element comes, when I pass it through the 3 hash functions, there is a very high probability that the indexes are already set to 1, and thus even if the element is a non-existing one, it gets rejected. I understand that the bloom filter is probabilistic, but in the above example, after 33-34 elements every element will be rejected because the bloom filter is completely filled. This seems very inefficient. Can I get some help?
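The saturation described here is really a sizing problem. Using the standard bloom filter sizing formulas (m = -n·ln(p)/(ln 2)² bits and k = (m/n)·ln 2 hash functions), a quick calculation shows how many bits the scenario actually needs:

```python
import math

def bloom_parameters(n: int, p: float):
    """Standard sizing: m = -n*ln(p)/(ln 2)^2 bits, k = (m/n)*ln 2 hashes."""
    m = math.ceil(-n * math.log(p) / (math.log(2) ** 2))
    k = max(1, round((m / n) * math.log(2)))
    return m, k

# For 100 items and a 1% false-positive rate you need roughly 959 bits and
# about 7 hash functions, far more than the 100 bits assumed in the comment.
print(bloom_parameters(100, 0.01))  # (959, 7)
```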
How does this work in distributed systems?
Not present probability for 'len' in your example?
What about key deletion? Suppose we remove DOG, so bit 10 needs to be reset, and next time if we search for RAT, we might get a wrong answer.
Bloom filters are used for use cases where there is no deletion. It's a prerequisite for a bloom filter.
If you're ok with taking up more space, you can keep a list of numbers and increment each index by 1 each time an item is added and decrement each index by 1 each time an item is removed. This concept is called reference counting.
Why would you read the hash entry from DISK?!!! Generally it will be in memory! Collisions may happen, but if a good hash function is used then it is not a big issue.
Thanks for great content
Amazing concept and great explanation.! Thank you.
comparisons take O(n) or O(m+n)?
Hello sir, Very good explanation
Sir, can you start a series on the top 100 data structures interview questions and their explanations?
Thanks a lot, nice explanation...
Do you mean 1 error in 10 million requests or 1 error in 10 million inputs? Will the same input in different queries result in the same false positive?
What about the Set data structure and its add method (Java)?
Awesome content!!!
Thank you! Great video!
Great, thanks 👌
We don't query bloom filters; rather, we take the help of the filter to check whether the data exists in the data structure or not.
superb bro !!!!
Thanks, Naren!! Interesting concept of a probabilistic data structure.
Do any more such data structures exist?
Yes, e.g., count min sketch
How do you insert into the hash function, and how is the index number generated?
Nice job buddy.
I have a question:
Just imagine that I have a list or table, and whenever I add a new item I calculate the two hashes and set them in the binary array exactly as you explained.
But when I remove one item, logically I should zero the positions in the bit array, right?
The problem comes when a position is shared with another keyword.
What should we do in that case?
There are some rules in bloom filters: you should not delete an item, since some of the items share the same bits.
@@Ramesh-ks4er So we should keep some other flags marking them as shared?
What if we stored lots of data and the BitArray holds all "1"s?