"I don't know what they were smoking" that got me ROFL
True!!!🤣
Bro! The walkthrough was great! Boosted my confidence
i solved this problem with a trie. thanks for your solution!
This one just came up in my OA today, if only I had seen this earlier
Same, unfortunately. Had to settle for half credit with brute force.
A Trie is specifically designed for searching prefixes. The challenge lies in its unique data structure (linked list + hashmap). If you can remember the structure and how to use it, you'll have a breakthrough. The algorithms for a Trie itself are incredibly simple.
Sets already don't contain duplicates, right?
Yeah that's right, but it's still worth checking because then we can stop the loop early.
LFG right when i needed it
amazing approach, thx!
I found a solution that involved sorting both arrays lexicographically, doing a prefix comparison between pairs starting with pointers at the beginning of both arrays, and then incrementing the pointer of the smaller number. I think the time complexity of that would be O(m log m + n log n), with O(1) space complexity. Interestingly, I got a faster runtime doing it this way, but that may have been down to the luck of the run.
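A minimal sketch of that sort-and-two-pointers idea, for reference (the function name and structure are mine, not the commenter's):

def longest_common_prefix(arr1, arr2):
    # Compare numbers as strings so both the sort order and the
    # prefix comparison are lexicographic.
    a = sorted(map(str, arr1))
    b = sorted(map(str, arr2))
    i = j = best = 0
    while i < len(a) and j < len(b):
        x, y = a[i], b[j]
        # Length of the common prefix of the current pair.
        k = 0
        while k < min(len(x), len(y)) and x[k] == y[k]:
            k += 1
        best = max(best, k)
        # Advance the pointer of the lexicographically smaller value.
        if x < y:
            i += 1
        else:
            j += 1
    return best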
Technically, with tries we do the same! The trie effectively sorts the values lexicographically, and we move the trie pointer to the subsequent trie nodes on each match while tracking the longest matching number.
In the real world, you'd probably have a SQL server for querying string matches, so the hashmap solution is in fact one of the ways to design a database for partial string search.
The trie solution is more intuitive after the recent problems, but yeah, you're right, the hash-set solution is more efficient. The editorial is kinda misleading and has some typos; I think they just don't care anymore.
bro PLEASE tell me what you used for that DSA tree on your website, you did mention it once but I forgot what it was
What position did you apply for in Google?
Great explanation
Isn't its time complexity O(n^2)? And the constraint was 5 * 10^4, so how is it possible to do it in O(n^2)?
Pls anyone help
🙏
It's not O(n^2). You're making a pass through one of the lists to build its hashset, then you iterate through the other list and check all possible prefixes of each number. Getting all possible prefixes of a number is considered constant, since the digit count is small and bounded.
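For reference, a minimal sketch of the hash-set approach being described (naming is mine):

def longest_common_prefix(arr1, arr2):
    # Store every numeric prefix of every number in arr1,
    # e.g. 1234 contributes 1234, 123, 12, 1.
    prefixes = set()
    for x in arr1:
        while x and x not in prefixes:
            prefixes.add(x)
            x //= 10
    best = 0
    for x in arr2:
        # Strip digits off x until it matches a stored prefix; this runs
        # at most ~9 times per number, hence "considered constant".
        while x and x not in prefixes:
            x //= 10
        if x:
            best = max(best, len(str(x)))
    return best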
13:23 I think they are right and you missed some math. I stand to be corrected, but the number of digits in a number is not given by its logarithm to base 10 but rather 1 + the floor of its logarithm to base 10. For example, 100 has 3 digits because its logarithm to base 10 is 2.
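That formula is easy to sanity-check (a quick sketch; in practice len(str(n)) sidesteps floating-point edge cases):

import math

def digit_count(n: int) -> int:
    # Digits of a positive integer n: 1 + floor(log10(n)).
    return 1 + math.floor(math.log10(n))

assert digit_count(100) == 3  # log10(100) = 2, so 1 + 2 = 3
assert digit_count(999) == 3  # log10(999) ≈ 2.9996, floor gives 2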
just make a trie with the numbers using digits and find the prefix lengths
Those were my initial thoughts as well
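A minimal digit-trie sketch of that idea (names are mine, not from the video; each node is just a dict of child digits):

def longest_common_prefix(arr1, arr2):
    # Insert every number from arr1 digit by digit.
    root = {}
    for x in arr1:
        node = root
        for d in str(x):
            node = node.setdefault(d, {})
    # Walk each number from arr2 down the trie as far as it matches.
    best = 0
    for x in arr2:
        node, depth = root, 0
        for d in str(x):
            if d not in node:
                break
            node = node[d]
            depth += 1
        best = max(best, depth)
    return best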
Thanks for your video!
Here are a couple of oneliners:
from typing import List
from operator import and_

class Solution:
    def longestCommonPrefix(self, a: List[int], b: List[int]) -> int:
        # range(10), not range(9): it guarantees 0 lands in both prefix sets,
        # so max() never sees an empty intersection when there is no common prefix.
        return len(str(max((f := lambda e: {v // 10**i for v in e for i in range(10)})(a) & f(b)) or ''))

class Solution:
    def longestCommonPrefix(self, a: List[int], b: List[int]) -> int:
        return max(map(len, and_(*({s[:i] for s in map(str, e) for i in range(len(s) + 1)} for e in (a, b)))))
Solved it!
I solved this on my own using a Trie. 50% runtime efficiency but only 5% memory efficiency. Happy that I solved it without any bugs.
u the goat
We can also just track the max matching number; if it's not 0, converting it to a string at the end to get its length works too.
👍👍
nice thumbnail
First
my ex has also become very large
optimization "n not in prefix_set" is actually redundant, since its hashset when we try to add an element which is already present in the hashset, this check will be executed internally.
So, this doesn't actually optimize the runtime.
It will terminate the loop earlier, so we avoid computing the hash of the key more times than needed.
Just benchmark the code and see the difference for yourself
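For the curious, these are the two variants being compared (a sketch with a hypothetical input, not the video's exact code):

arr1 = [1, 10, 100]
prefixes = set()

# Without the early exit: always walk every prefix of x.
for x in arr1:
    while x:
        prefixes.add(x)
        x //= 10

prefixes = set()

# With the early exit: if x is already stored, all of its shorter
# prefixes were stored on an earlier pass, so the loop stops right away
# (here 100 stops after adding itself, since 10 is already present).
for x in arr1:
    while x and x not in prefixes:
        prefixes.add(x)
        x //= 10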