Here’s my solution code to this problem in Python and Java: www.csdojo.io/problem

Also, for improving your problem-solving skills, as I mentioned in the video, I recommend the following two resources:
- 11 Essential Coding Interview Questions (my Udemy course): www.udemy.com/11-essential-coding-interview-questions/?couponCode=PROBLEM
- Daily Coding Problem (a website that’s run by a friend of mine): www.csdojo.io/daily

See you guys in the next video!
Hey CS Dojo, your videos are very different from other online tutors'. I loved the Python tutorials. I wish you'd make C programming tutorials too. Hoping to hear from you soon.
What you can do is replace every element in array A with its complement (the target minus the element). Then, for every element in array A, you do a binary search in the sorted array B to find the closest possible number. This gives you an O(n log n) solution with only O(1) extra space.
To everyone who understood this video, I'm happy for you. I'll get there soon. Update: I'm working as a Software Engineer in Toronto. Keep working hard, one day it'll pay off 👊🏼⚡️
Tip #1: Come up with a brute-force solution - 1:23
Tip #2: Think of a simpler version of the problem - 2:34
Tip #3: Think with simpler examples -> try noticing a pattern - 5:54
Tip #4: Use some visualization - 10:10
Tip #5: Test your solution on a few examples - 15:09
Just a thought. If the interviewer has not seen or practiced a given problem, will they still be able to solve it and evaluate a candidate? The problem with these kinds of interviews is that the interviewer and the candidate are not on equal ground. When the interviewer clearly knows the answer, it becomes somewhat biased when they judge their candidates (by asking things like "can you think of a better solution?", and notice the keyword is "think," not "recall").

I guess my point is that I don't quite believe the interviewer can always improvise a solution to problems such as finding the total number of subsets of integers that add up to a number, or a cellular-automaton problem with lots of recursion. They can probably solve it by sitting quietly by themselves, tackling the problem without a time limit and without people watching them. But can they "think from scratch" themselves? Do you code well under pressure, with people watching and judging you?

You don't see these kinds of interviews in other fields such as EE, physics, or chemical engineering, because there's no way to ask you to design and implement a VLSI chip or a quantum mechanical system on the spot. But somehow in CS this is convenient. Very often you interview for a machine learning position but people focus largely on tricky coding problems, almost as if statistics and math were not as important; kind of going backwards, imho.

Let's have an open research question instead, so that neither the interviewer nor the candidate has a viable solution at the moment. Then we both try to solve the problem or come up with a tentative solution, in which case you also get to see how the candidate approaches the problem: their patience, analytical capacity, personality, and so on. At the very least, this interview process would be less biased.
When people practice a lot of these coding problems and then go into an interview, they are really just "recalling from memory" how to solve certain problems; you are not really testing their "analytical abilities."
The bias you're [rightly] referring to is irrelevant. You're not competing against the interviewer, you're competing against other candidates. So you shouldn't care how biased your interviewer is to that particular problem 'cause he applies the same bias to all candidates (in theory). Therefore, it doesn't matter. What this does or doesn't test and/or how it correlates with candidate's qualities is another question. But the playground is equally fair (or unfair) for all candidates.
You guys are not newbies like me, which is why these ideas come to your heads. But those who sit at big companies as interviewers are as good as anyone could be, believe it. And you have to be that good to become a member of a big company.
Hey, you're not stupid. These things are just new to you. You're more than capable of learning this. Just break it down, bit by bit, and teach it to yourself in a way that makes sense to you.
Here's a solution that is also O(n log n), but faster than the one you provided: sort array 1 (O(n log n)). Now, for each value x of array 2, binary-search array 1 for the element closest to (target - x). That is O(n log n) as well. Your solution requires two sorts plus extra processing, but this one requires only one sort.
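A quick sketch of this idea in Python with the standard `bisect` module (the function and variable names are mine; here `b` is the array that gets sorted and `a` is scanned). One subtlety: after the binary search you have to check the neighbor on each side of the insertion point, since the closest complement can sit just below it.

```python
import bisect

def closest_pair(a, b, target):
    b = sorted(b)                               # the one O(n log n) sort
    best = (a[0], b[0])
    for x in a:                                 # O(n) elements ...
        i = bisect.bisect_left(b, target - x)   # ... times O(log n) search
        for j in (i - 1, i):                    # candidates on both sides of target - x
            if 0 <= j < len(b):
                if abs(x + b[j] - target) < abs(sum(best) - target):
                    best = (x, b[j])
    return best

print(closest_pair([-1, 3, 8, 2, 9, 5], [4, 1, 2, 10, 5, 20], 24))  # → (3, 20)
```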
The basic set solution only works on the simplified problem where we're looking for the exact value. When we're solving the real problem, the set solution is O(x*n), which might end up being much bigger than O(n log n). For example, with the simple one-element lists {500} and {400} and a target sum of 10, the set algorithm would take almost 900 iterations to find the answer.
The time complexity is O(x*n), but x can be arbitrarily large. For the third solution, it really depends on the sorting algorithm used. If you use mergesort or heapsort, it's definitely O(n log n) time, but space will also be O(n). If you use quicksort, the worst case is O(n^2), but space is O(1). In my opinion, you can't objectively say which solution is better without some idea of how large x tends to be: for small x, the second solution is better; for larger x, the third.
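To make the x*n factor concrete, here's a sketch (my own naming) of the set solution extended to the "closest sum" problem: d is the distance from the target we're currently willing to accept, and d is exactly the x factor being discussed. It assumes both arrays are non-empty, otherwise the loop never terminates.

```python
from itertools import count

def closest_sum_set(a, b, target):
    s = set(b)                                 # O(n) build, O(1) average lookups
    for d in count():                          # d = 0, 1, 2, ... is the "x" factor
        for want in (target - d, target + d):  # try sums at distance d from target
            for x in a:                        # O(n) scan per candidate distance
                if want - x in s:
                    return x, want - x
```

With the {500} / {400} example above, d has to climb all the way to 890 before the only possible sum, 900, is accepted, which is where the "almost 900 cycles" figure comes from.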
That's actually a very good idea, I hadn't thought of it. Great video! But why not use binary search for this problem? The complexity would also be O(n log n): you iterate through the unsorted array and binary-search the sorted one for the closest sum. Roughly speaking, the cost is 2 * n * log n, since you only sort one array and binary-search it once per element of the other. The concept of binary search applies to a variety of other problems, but I'm not sure whether the idea in the video is applicable elsewhere; it looks like semi-dynamic programming.
That's what I came up with as well. But it will probably be slower, because the second `O(n log(n))` pass will eventually be slower than an `O(n)` one. Still, it's `O(n log(n))` overall, so maybe it's not that bad.
I have the same doubt (as I would in an interview)... Is looking a number up in the set O(n)? Then it should still be O(n^2). Or is membership Θ(1), making the whole thing Θ(n)?
If you hash the numbers of the first array, it will be O(n). If you put them in a tree-based set, you have to use something like lower_bound() plus additional checks; hashing ensures O(n).
According to Stack Overflow, add, remove, and contains on a HashSet are O(1). So it's O(n) to add the elements and O(n) to scan the other array: O(2n) = O(n).
Tip #2 won't be O(n), since for each number in the first array we calculate the complement and then look at each number in the second array. Thus it is the same O(n^2) as the brute-force solution. Actually, it's quite a bit worse, since we actually have x * O(n^2). As for the final solution, the visualization is nice and all, but it's actually quite easy to solve this without it. Just sort both arrays, one from small to big and the other the other way around. Start from index 0 in both; if the sum is bigger than the searched number, get the next value from the second array, otherwise get the next value from the first. Remember the sum at each step and print the closest pair. Boom, problem solved: O(n log(n)).
These tips are amazing!! The final solution amazed me. I just recently failed an interview for an internship at one of the big 4, and I'm determined to study at least 2 hours a day for my next one. I hope you make more videos like these!
Another way: sort the first array (a1) in ascending order and the other one (a2) in descending order. Start with i1 = 0, i2 = 0 (i1 as the index into the first array, i2 into the second). Check a1[i1] + a2[i2]: if it's less than the given number, do "i1++" (to pick a larger number); if it's greater than the given number, do "i2++" (to pick a smaller number). You will get the answer with the following complexity: sorting O(n log n), twice; iteration O(n), twice (worst case).
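A sketch of that walk in Python (names are mine; the comment leaves the best-so-far bookkeeping implicit, so I've added it):

```python
def closest_pair_two_pointer(a1, a2, target):
    a1 = sorted(a1)                          # ascending
    a2 = sorted(a2, reverse=True)            # descending
    i1 = i2 = 0
    best = (a1[0], a2[0])
    while i1 < len(a1) and i2 < len(a2):
        x, y = a1[i1], a2[i2]
        if abs(x + y - target) < abs(sum(best) - target):
            best = (x, y)
        if x + y < target:
            i1 += 1                          # need a larger number from a1
        elif x + y > target:
            i2 += 1                          # need a smaller number from a2
        else:
            return x, y                      # exact match
    return best
```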
Not sure if I get #2. You create a set, but the set is still a collection you need to compare against individually, which is still n^2. Did I misunderstand what a set is?
Looking up a value in a hash-based 'set' data structure costs O(1) time. You just ask the set whether it contains a specific number (e.g. set.contains(5)), instead of iterating through the values in the set.
Regarding your last solution: if one of the starting arrays contains two equal numbers, I think this approach could lead to a problem. In your grid-like visualization there would be a row (or column) with two equal numbers next to each other. Say your target is 17 and you hit a spot with two 16s next to each other: you could end up ignoring the second 16 that sits to the right of the one you are checking. So when you skip the cell next to the number you just checked, you first have to verify that the neighbouring numbers really are smaller. Or am I missing something here?
Maybe I'm wrong, but there wasn't any consideration of the complexity added by sorting in #3 or the hash-table lookups in #2. In this case the change to the solution's complexity is negligible, but you can't forget to consider those steps.
My Tip #2: find the inefficiencies in the brute-force solution and see if you can optimize them. Here it's the scan of the second array for each element of the first, which can be optimized by sorting the second array and using binary search (I haven't watched the entire vid yet, so I don't know if that's the intended solution or complexity). Edit: Huh, so two pointers can lead to an O(N) solution (after sorting)! Two pointers and binary search seem very related!
So funny, isn't it? Two years ago I left a comment here from one of my other accounts saying DP is so hard and maybe coding isn't for me 😢. Now that I'm back, after two years of wholesome practice and grinding on CF, CC, and LC, I'm getting every word he says like ABCD... heck, I'm even getting better ideas than what he's conveying here.

If you're demotivated, don't be. Keep practicing. I, personally, am not very bright academically. I take hours, even days, to digest easy basic concepts, and that's alright. As long as you get it in the end, it's a win for you. Keep grinding. Keep hustling. Don't cry. Don't despair. We aren't all the same. Some take minutes to understand segment trees, while people like me took a whole month to even get the idea of how they work, the logic behind their construction, and their usage in problems. Just keep hustling; I know you'll get there.

BTW, did I tell you I'm not a CS major? I'm studying Physics, which I hate with all my passion. Maybe you might consider reading the only comment (which is mine, btw) on this video: ruclips.net/video/6MPP_MqS0WA/видео.html
You made it complicated. A simpler solution: just sort both arrays and use two pointers, one for the first and one for the second. Compute the sum and store the difference and the numbers. Then advance the pointer sitting on the smaller value. Repeat; if the current difference is less than the stored one, update the difference and the result numbers. That's it. No need for extra calculations.
You could have solved the problem with the second method in O(n log n) time if you used a balanced BST and binary-searched for floor and ceiling values. Since we don't actually need the indices of the values, we can use a TreeSet in Java (a red-black tree) to implement this idea without it being too bulky. That saves all the ArrayList-sorting shenanigans.
More than 90% of newcomers to programming learn from tutorials on YouTube or similar platforms like Coursera... However, when most of us finally feel confident in whichever language we chose to learn and try to start our own project, we get overwhelmed by all the ideas, functions, and components we're positive we're gonna need, yet we lack the skill to gather all of that and put it together into a final product. We DON'T EVEN KNOW WHERE TO START, even though we feel like we understand everything. I took some time and put some effort into researching this, and I figured it out: WE LEARNED IT THE WRONG WAY!!! I've talked about it briefly in my first video in a long tutorial series here on YouTube: ruclips.net/video/oVd3NivzZx8/видео.html
YK, thinking out loud in an interview setting is intimidating; it takes experience and practice to master the technique. Your explanation is so valuable: learning how to think and what questions to ask. Here's an idea: take the LeetCode site, there are over 700 problems. You solve one problem a day and show how you'd approach it, how to think, what questions to ask, how to optimize... I think that's so valuable. Who knows, by the end of the year you could gather up all those videos and create another Udemy class. I think lots of people would appreciate that.
No, with tip 2 he only iterates through the list once. He takes the first element of the list and sees what would sum to 24. Call that number x, so x + first element = 24. You can use set membership to check whether x exists in the set, and if it does, you have found the answer. That is O(n), because you iterate through each element once to see if there is an x that makes this true. Hope this makes sense.
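In code, the simplified (exact-match) version looks like this (a sketch; names are mine):

```python
def find_exact_pair(a, b, target):
    seen = set(b)                 # O(n) to build
    for x in a:                   # O(n) scan
        if target - x in seen:    # O(1) average membership test
            return x, target - x
    return None                   # no pair sums exactly to target
```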
It's useful when you practice solving algorithm questions, but I doubt how much time you have to go through this kind of thinking process in actual interviews, especially considering that interviewers usually have two questions prepared. So practice is still the key. Don't ever try to rely on the techniques introduced here to hack an interview.
This is not true for interviews I have conducted. I would rather get a candidate who thinks through a problem and reaches even a brute-force solution to something they have never seen before. Sometimes you can tell that a person has practiced the question and simply memorized a solution; gaps start to show when you ask questions about basic concepts in their target language that are not related to solving problems. Practice is important, but most professionals don't have the time to sit and run through programming problems in their free time. But you will have amassed many skills just from working full time, and you will feel more comfortable showing up to interviews without the overnight cram sessions I used to do in college. And yes, there will be times when you can't answer the question. I have been there multiple times (in fact, with problems similar to the one in the video). And that's okay; it just means you're not the right fit for the job. I also don't want to reach a position where I realize later on that I don't have the technical knowledge to contribute well. It doesn't feel good.
Just sort both arrays; from one array pick one element at a time and binary-search the other array for the other half of the number. You can write two binary searches for this: one should return the exact number or the closest number above it, and the second the exact number or the closest one below it. Actually, you only need to sort the array the binary search runs on. Overall complexity is O(n log n). In the end, what you came up with is very similar to this, and visualizing problems and solutions is a very important skill. Anyway, I wrote this just to show that there are other ways to think about such problems.
He states at the end that the time complexity is O(n log n) and the space complexity O(n), "assuming that you use an nlogn sorting algorithm," i.e. mergesort or the like. Mergesort is O(n log n) time and O(n) space, so we can assume he is accounting for this correctly.
Just off the top of my head (I didn't actually implement this solution, so it may not work): sort both arrays, point one pointer at the first element of the first array and another pointer at the last element of the second array. Take the sum of the two elements under the pointers: if the sum is too low, move the pointer on the first array to the next element; if the sum is too high, move the pointer on the second array to the previous element. Continue until the pointers criss-cross, i.e. while (secondP > firstP), MEANWHILE keeping track of the lowest difference so far between the sums and the target, and the corresponding indexes.
Can't we sort only one of the arrays and then binary-search it for each element of the other array? The time complexity still remains O(n log n).
Nice idea, but I think in this case the visualization makes the problem a bit more complex than it is. I came up with a simple short solution (C++) with O(n log n) runtime complexity and O(n) space complexity. In my solution the arrays can have different sizes.

```cpp
#include <vector>
#include <set>
#include <iterator>
#include <cstdlib>
using namespace std;

pair<int, int> sumUpToTarget(vector<int>& v1, vector<int>& v2, int target) {
    if (v1.size() < 1 || v2.size() < 1) return {};
    set<int> s(v2.begin(), v2.end());
    pair<pair<int, int>, int> p = { { v1[0], *s.begin() },
                                    abs(target - (v1[0] + *s.begin())) };
    for (int elm : v1) {
        // lower_bound gives the first element >= target - elm; the closest
        // value can also be the element just before it, so check both
        auto pos = s.lower_bound(target - elm);
        for (auto it : { pos, pos == s.begin() ? pos : prev(pos) }) {
            if (it == s.end()) continue;
            if (abs(target - (elm + *it)) < p.second)
                p = { { elm, *it }, abs(target - (elm + *it)) };
        }
    }
    return p.first;
}
```
You don't really need those exact insights to solve the problem; the visualization just helps. He was basically brute-forcing his way to the insights. You could just as easily find the answer by noticing that if you sort the data you can eliminate a lot of checks, because if you know that array1[i] + array2[j] > target, then there is no point checking array1[i] + array2[j+1]. Sort the data and look at {1, 4, 7, 10} and {4, 5, 7, 8} with target 13: 10 + 4 is bigger than 13, so there is no point checking any other number with 10; move to 7. 7 + 4 is less, so move forward. 7 + 5 is less, so keep going. 7 + 7 > 13, so no point going further. Move to 4, and so on, until you need to move left or right but there are no more elements. Then just return the answer you hopefully kept track of during the looping.
If you were given two arrays of 1000 elements, where O(n^2) is unacceptable, it MIGHT push you in the direction of the spatial solution, but good luck implementing that on a whiteboard, lol, in hour three of an interview loop.
Thank you! The Udemy videos are perfect. It's the only Udemy course I've actually used... sadly, such ambition, yet no follow-through. I have a very important interview coming up and I feel nervous because I'm competing with people who were formally trained in CS. I just learned everything on the fly with no structure. I didn't even know there were algorithm types or what a binary search was. I would use them all the time (I have a physics background, so I understand optimization), but I never learned the names. After seeing the different types of algorithms, I feel like I'm starting with some sort of foundational idea of how to do it. I thought binary search meant you put a 0 if it's not what you were looking for and a 1 if it was. lol
Wait, at 4:08 this is still O(n^2), since you have to check each number from the first array for every number in the second array. And you can't search the first array in less than O(n), so that's still gonna be O(n^2).
I agree. I believed that with any solution on unsorted array(s), you still need to check every element; with both arrays the same fixed size, that's n^2. This also caused me to pause, and then I did a kind of deep dive on this problem. Even with the elements sorted, a brute-force algorithm is n^2. Consider: desired: 70, array1: [50, 51, 52, 53, 54], array2: [20, 21, 22, 23, 24]. Whether we start at the end, or consider a desired number of 78 or greater where we start from the beginning, brute force is O(n^2). So we would like better if possible. Other people have posted the same solution, but typing it out helps me :-) I do like to start with how sorting the data will help me. My thought: with the arrays sorted, you can iterate array1's elements (n times), then binary-search array2 for a number that adds up to the desired number, storing off the closest sum as you go. The binary search on the sorted array2 reduces each of array1's n operations to log(n), so we have n log n. A bit irrelevant to time complexity, but now that I think about it, only one array needs to be sorted.
What exactly is the set? Is it a hash table? Does that mean it's constant-time lookup? I'm just confused about how the runtime is linear in the second version of the solution to the first problem.
The reason for putting the data in a set is that a set is built on a hash table, so the data can be accessed via the hash table. Lookup in a hash table is quicker than in an array: with an array you have to iterate over the list, while a hash table can find the item you are looking for right away, if it exists. In Python, at least, this is how sets are built. You still need to iterate over one of the arrays, so I believe that makes the solution linear.
For your second approach, you need O(n log n). Suppose you have two arrays A and B with n elements each; for each element in B you perform an O(log n) binary search for the other number of the pair in A. For n elements in B, the complexity is O(n log n).
But we can't use binary search, since the two arrays are unsorted. So the second approach would also yield O(n^2) time complexity. (Please correct me if I'm wrong.)
@vedantsharma5876 You can first sort both arrays in O(n log n), then do the binary search for each element in B, which is also O(n log n), since each binary search takes O(log n). So, in the end, it should be O(n log n). If the arrays were already sorted, the algorithm from the video would take only O(n), while a binary search for each element in B would still take O(n log n).
I would say it's O(n*m), where m is the value of the target. Think of the corner case where the target is huge, like 1000000000, and all the numbers are small (0, 1, 2, etc.). You would have to iterate 1000000000 times to get the answer.
My solution to this would be to sort just one of the arrays. Then, for each element of the second array, binary-search the first array for the 'complement' (the number that would exactly equal the target); this gets you the closest candidate. From there, you just keep track of the minimum absolute difference from the target. This is also an O(n log n) solution.
Good sales pitch. A simple solution: You could associate each value to each bit of an integer (12 bits, 6 MSB is first array, 6 LSB is second array) in total going from 0 to 2^12=4096. Add all the values associated with a 1. This would go through all the combinations at light speed.
5:30 Not every language has a function to check whether a particular element is in a list. In C there isn't one, so you have to compare against each element; that's O(n^2). 8:45 If you sort first, you have to account for the sort's complexity, O(n^2) or O(n log n) depending on the algorithm.
Right, he is not a programmer at all: checking the set also takes n actions, but he talks about it as if it were nothing. He said other silly things, too. If I am wrong, explain it to me; I'll accept it.
Here is the solution I came up with in Python:

```python
def find_pair(x, number):   # x must be sorted
    start = 0
    end = len(x) - 1
    while start < end:
        base = x[start] + x[end]
        if base == number:
            print(str(x[start]) + " and " + str(x[end]) + " add up to " + str(number))
            return True
        elif base > number:
            end -= 1
        else:
            start += 1
    print("No pair in this set of numbers adds up to " + str(number))
    return False
```
A very nice explanation of the thinking process. But another trait of a good programmer is knowing their data structures. Java, for example, has TreeSet with O(log n) ceiling/floor methods, and that follows straight from your "simpler version of the problem." So, for those of us not bright enough to come up with the original solution, it's worth learning the proper tools our languages provide.
This can be done in O(n) time. Go over the first list and create a new list with the numbers required to reach the target; then go over the second list and compute Math.abs() to determine how close you are to the target. If you are closer than the previous best, mark this tuple in a list. After iterating the second list, you know exactly how close you can get and with how many tuples. No need to sort; if you sort, you get into n*log n territory. The only problem with this solution is that it needs O(n) memory.
It refers to the complexity of the operation: in terms of the size of the input, how much effort or memory it takes to run. For example, the first naive solution of comparing every pair of values from the two arrays had a time complexity of O(n^2). For input arrays of size n, there are n^2 possible pairs that take one number from each array, and for each pair we need to calculate its sum and compare that value to the target.

This matters because as the size of the input arrays grows, the number of pairs, and so the number of operations we must perform, grows faster than n. Imagine comparing two arrays of size 4, like in the video: even a brute-force solution only requires us to calculate 16 pairs, not too bad. Now imagine the arrays had 20,000 elements each. The brute-force solution requires us to calculate 400,000,000 pairs, and it only gets worse as the input arrays get larger.

To sum it up, the complexity of an algorithm refers to how the amount of work required scales with the size of the input. Time complexity generally refers to the number of distinct operations we must perform; space complexity generally refers to the amount of memory needed to store information. For example, in the final solution in the video, the matrix used to visualize the problem was n by n, with n^2 cells. If we tried to compute and store the value of every cell, the time complexity would be O(n^2) (the number of pairs we check) and the space complexity would also be O(n^2) (a matrix with n^2 cells). This means that in an implementation of this algorithm, it is important not to actually create such a matrix, but instead find a different way to store the information we need, computing only the values in the cells we check, starting from the top right.
There is more to it: there are actually two notations here, "O" (big-O notation) and "o" (little-o notation). Both deal with upper bounds on growth (little-o is the strict version). Big-O notation is what people generally focus on, so if someone says just "O" or "O notation" they usually mean big-O. (There are also Omega notations that deal with lower bounds, but those usually aren't as important.) If you are curious, this page is probably more thorough than I was here: en.wikipedia.org/wiki/Time_complexity
In case anybody else reads this in the future, the easy way to think about it: for two arrays of 5 elements each, brute force = 5**2, while the final solution is (5 * log(5)) * 3 (two sorts, then the final search). Brute force comes out to 25 and the final solution to about 24.1416, which is pretty close, right? But for two arrays of length 1000, brute force is 1000**2 = 1,000,000, while the final solution is (1000 * log(1000)) * 3, about 20,723, far less than a million. It only gets worse from there. ;)
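The arithmetic can be checked directly with the natural log (which is what produces the 24.1416 figure); for n = 1000, n*ln(n) is about 6907.76 on its own and about 20,723 once the factor of 3 is included, either way tiny next to a million:

```python
import math

for n in (5, 1000):
    brute = n ** 2
    final = 3 * n * math.log(n)    # two sorts + one scan, natural log
    print(n, brute, round(final, 4))
# → 5 25 24.1416
# → 1000 1000000 20723.2658
```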
For the second algorithm (with a set), the complexity will be O(x*n*log(n)) if the set is tree-based. First, you need to fill the set: inserting one element is O(log(n)), so inserting n elements is O(n*log(n)). Second, you iterate through the array (O(n)) and search the set (O(log(n))) for each element, which gives us O(n*log(n)), and you do it x times. PS: if, on the other hand, the set is a hash table, searching for an element is O(1) (barring hash collisions), so yes, it could be O(x*n).
After creating the table, a better way to find the closest number is binary search, in my opinion. In your examples the closest number was always somewhere near the middle, but if it is in the last box, binary search will do wonders.
Another great way to solve this problem:
1. Sort one of the arrays: O(n*log(n)).
2. Make that sorted array into a balanced binary tree: O(n).
3. For each elem in the other array: DFS-search the binary tree for the closest pair.
4. The DFS method is defined as follows:
   - if bTree.root + elem is closer to the target than the current best pair, then (bTree.root, elem) becomes the best pair
   - if bTree.root + elem < target && bTree.right.nonEmpty, then DFS(right)
   - else if bTree.root + elem > target && bTree.left.nonEmpty, then DFS(left)
The DFS lookup of the closest pair for a single element takes O(log(n)) time. Repeating for each element in the second array gives us O(n*log(n)) time.
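A Python sketch of steps 2 through 4 (names are mine). The descent records a candidate at every node and then moves in only one direction, which is what makes each lookup O(log n) in a balanced tree. As written, the list slicing makes the build step O(n log n) rather than O(n); an index-based version would fix that.

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def build_balanced(sorted_vals):
    # sorted list -> balanced BST (the middle element becomes the root)
    if not sorted_vals:
        return None
    mid = len(sorted_vals) // 2
    return Node(sorted_vals[mid],
                build_balanced(sorted_vals[:mid]),
                build_balanced(sorted_vals[mid + 1:]))

def closest_in_tree(root, want):
    # descend toward `want`, remembering the closest value seen so far
    best = root.val
    node = root
    while node:
        if abs(node.val - want) < abs(best - want):
            best = node.val
        node = node.left if want < node.val else node.right
    return best
```

For each elem of the other array, you would call closest_in_tree(root, target - elem) and keep the best resulting pair.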
Exactly. This is much better, because if the arrays have different lengths, you can sort only the smaller one. And even if they are the same length, real-world performance is better.
This is brilliant. Thank you so much for this. Easily the most detailed, clearly articulated, and digestible general approach to solving algorithm problems. In so many YouTube videos on algorithms, the people solving them seemingly grab the solution out of thin air. No doubt this is because they've done a ton of algorithms in the past, recognizing patterns and reaching for solutions that worked for them before. The problem for people newer to solving algorithms is exactly that: it seems to come out of thin air, and it doesn't help, because it doesn't show the thought process behind the original pattern they're grabbing for. Hope that makes sense, and thanks again for the video.
Hey YK, I dream of working at big tech companies like Google, Facebook, Apple, and Microsoft, so I used to search this topic on Google. Then I watched your "How I Got a Job at Google as a Software Engineer (without a Computer Science Degree!)" video, got hope from it, and started learning programming. But on Google's career site I saw that every Software Engineer job seems to require a Computer Science degree, which I don't have. What can I do now? Can I expect to get a job at these tech companies? Please help me by answering this question. Thank you in advance.
I wouldn't think of it as companies interviewing candidates. It seems to me that candidates are conducting the interviews now: what benefits do you offer? Why is this company hiring now? 😂
Google Interviewer : We have a problem try to find an answer. Me : First solution ...DoS attack Sir. This will make sure the problem doesn't exist at all. I hate problems. You know ?
That was what I thought of immediately, but on second thought, his solution might be better. For just one query, both solutions have a time complexity of n*log n, and binary search might actually be faster, since sorting requires a lot of memory writes. But if you need to reuse the same two arrays for multiple targets, his solution has an amortized complexity of n. Not to mention it looks much more elegant.
I am so discouraged that I couldn't wrap my head around this "simple" problem... Not being able to solve this kind of thing drives me crazy, because I love to code, but I am horrible at things like this. Help? How can I improve in this regard?
Python one-line solution:

```python
foo = lambda a1, a2, val: sorted(
    ((x + y - val, (x, y)) for x in a1 for y in a2),
    key=lambda t: abs(t[0]),
)[0]
```

Run it like this: print(foo([-1, 3, 8, 2, 9, 5], [4, 1, 2, 10, 5, 20], 24))
This is a solution that returns all of the closest pairs found along the sweep. The differences from CS Dojo's answer are that it stores the candidates in a dictionary (hashmap) keyed by distance and doesn't stop once it finds a sum that matches the target.

```python
def solution(arr1, arr2, target):
    xs = sorted(arr1)
    ys = sorted(arr2)
    i = len(xs) - 1                      # start at the largest value in xs
    j = 0                                # and the smallest value in ys

    distances = {}                       # abs(diff) -> set of pairs at that distance
    smallest_diff = abs(xs[i] + ys[j] - target)
    while i >= 0 and j < len(ys):
        diff = xs[i] + ys[j] - target
        distances.setdefault(abs(diff), set()).add((xs[i], ys[j]))
        smallest_diff = min(smallest_diff, abs(diff))
        if diff > 0:
            i -= 1                       # sum too big: try a smaller x
        else:
            j += 1                       # sum too small (or exact): try a bigger y
    return distances[smallest_diff]
```
I'm new to programming; my first thought was to take the absolute value of target minus the pair's sum and check whether it's smaller than the best one so far. In Python, kinda like:

```python
best = float("inf")  # best (smallest) difference seen so far
# ...inside whatever loop generates each candidate (pairval_1, pairval_2):
d = abs(target - (pairval_1 + pairval_2))
if d < best:
    best = d
    x, y = pairval_1, pairval_2
    if best == 0:
        break  # exact match, can stop early
# after the loop:
return x, y
```
Here’s my solution code to this problem in Python and Java: www.csdojo.io/problem
Also, for improving your problem-solving skills, as I mentioned in the video, I recommend the following two pieces of resources:
- 11 Essential Coding Interview Questions (my Udemy course): www.udemy.com/11-essential-coding-interview-questions/?couponCode=PROBLEM
- Daily Coding Problem (a website that’s run by a friend of mine): www.csdojo.io/daily
See you guys in the next video!
Hey CS Dojo, your videos are very different from other online tutors'. I loved the Python tutorials. I wish you'd make C programming tutorials too. Hoping to hear from you soon.
Which software do you use for the presentation?
What you can do is replace every element in array A with its complement (the target minus the element). Then, for every element in array A, you do a binary search in a sorted array B to find the closest possible number. This gives you an O(n log n) solution with only O(1) extra space (if B is sorted in place).
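A sketch of this idea in Python (my reading of the comment, not the video's code; here the "complement" of x is target - x, and since bisect returns an insertion point, the closest value in B is either the element at that point or its left neighbour):

```python
import bisect

def closest_pair(a, b, target):
    b = sorted(b)                              # O(n log n), done once
    best = (a[0], b[0])
    for x in a:                                # n iterations...
        i = bisect.bisect_left(b, target - x)  # ...each O(log n)
        for j in (i - 1, i):                   # closest is at i or i - 1
            if 0 <= j < len(b):
                if abs(x + b[j] - target) < abs(sum(best) - target):
                    best = (x, b[j])
    return best
```

For the video's example, closest_pair([-1, 3, 8, 2, 9, 5], [4, 1, 2, 10, 5, 20], 24) returns (3, 20), whose sum 23 is within 1 of the target.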
18:07 check out projecteuler. It has 645 problems that can be solved and once you solve a problem you can see how others solved it. IT IS FREE!
@@cookeemonstahz it's O(n) space just to store the array.
To everyone who understood this video, I'm happy for you. I'll get there soon.
Update: I'm working as a Software Engineer in Toronto. Keep working hard, one day it'll pay off 👊🏼⚡️
i dont understand too
You got me but we are half way there okay! Haha stay curious!
I'll be there after you
Same will soon get there
Hopefully
Besides everything related to coding and thinking, I really like the way that on clicking the dotted box bursts to display what's inside.
The same thing with me ~
any idea what tools/apps he is using to show the effects?
i hope the same thing happens to me when i start to code
Poo L idk. He might be the one who made that tool.
😂 😂 😂 😂 yea Simran.. lets just miss every imp. coding info. and lets focus on pop up boxes 😂..
Tip #1: Come up with a brute-force solution - 1:23
Tip #2: Think of a simpler version of the problem - 2:34
Tip #3: Think with simpler examples -> try noticing a pattern - 5:54
Tip #4: Use some visualization - 10:10
Tip #5: Test your solution on a few examples - 15:09
nice
Underrated hero, cheers mate
How much free time does this guy have?
@@neensta5404 Not gonna lie that was kinda aggressive...
Doing God's work
The moment your best solution is the brute-force one.
1:32 This pair, Dis pair, Despair.
i cried laughing broh
Yes, the guy is just noob
In the first example he showed a list with negative numbers, so his last method will not work
He will just fail an interview
@@maksimbeliaev5339 u suck
Maksim Beliaev you know that he's an ex-google software engineer right?
And that means what exactly?
Just a thought.
The question is: if the interviewer has not seen or practiced a given problem, will they still be able to solve it and evaluate a candidate? The problem with these kinds of interviews is that the interviewer and candidate are not on equal ground. When the interviewer clearly knows the answer, it becomes somewhat biased when they attempt to judge their candidates (by asking things like "can you think of a better solution?" and so on; notice the keyword is "think", not "recall").
I guess my point is that I don't quite believe the interviewer can always improvise a solution to problems such as finding the total number of subsets of integers that add up to a number, or a cellular-automaton problem with lots of recursion. I believe they could probably solve it by sitting quietly by themselves and tackling the problem without a time limit and without people watching them, but the question is: can they "think from scratch" themselves? Can you code under pressure with people watching and judging you?
You don't see these kinds of interviews in other industries such as EE, physics, chemical engineering, etc., because there's no way to ask you to design and implement a VLSI chip that functions a certain way, or to design a quantum mechanical system, on the spot. But somehow in CS this is convenient. Very often you interview for a machine learning position but people only, or to a large extent, focus on tricky coding problems, almost as if statistics and math are not as important; kind of going backwards, imho.
Let's have an open research question instead, so that neither the interviewer nor the candidate has a viable solution at the moment. Then we both try to solve the problem or come up with a tentative solution, in which case you also get to see how the candidate approaches the problem: their patience, their analytical capacity, personality, and so on. At the very least, this interview process would be less biased.
When people practice a lot of these coding problems and then go ahead to have an interview, they are really just "recalling from memory" on how to solve certain problems but you are not really testing their "analytical abilities."
This is so true. But, sadly a reality!
I like your idea about both interviewer and interviewee do that same problem. That would be awesome.
The bias you're [rightly] referring to is irrelevant. You're not competing against the interviewer, you're competing against other candidates. So you shouldn't care how biased your interviewer is to that particular problem 'cause he applies the same bias to all candidates (in theory). Therefore, it doesn't matter. What this does or doesn't test and/or how it correlates with candidate's qualities is another question. But the playground is equally fair (or unfair) for all candidates.
@@vassilyn5378 You are right, it doesn't really matter how hard or easy the problem is if the comparison is only against other candidates.
You guys are not newbies like me; that's why these things come to you. But those who sit at big companies as interviewers are about as good as anyone could be, believe it, and you have to be that good to become a member of a big company like that.
This just made me realize how stupid I am 😂😂
Hey, you're not stupid. These things are new to you. You're more than capable of learning this. Just break it down, bit by bit, and teach yourself in a way that makes sense to you.
Yes you are (if you compare yourself with someone who has more practice and experience), but if you practice, this will be easy for you. Good luck.
xxGodx ur harsh...be more positive
@@rsmlifestyle3436 thanks bro
@@rsmlifestyle3436 wow, that's one of the most positive things that I've read online. Thanks for spreading this positivity! Invaluable!
Here's a solution that is also O(nlogn), but faster than the one you provided:
Sort array 1 (n log n). Now, for each value x of array 2, do a binary search on array 1 to find the element closest to (target - x). This is n log n as well.
Your solution requires 2 sorts and extra processing, but this one only requires one sort.
That's the same solution that jumped into my mind once I read the problem.
Exactly my solution
I have solved a problem very darn similar to this earlier today and this was the solution I came up with too
Closest doesn't mean that the sum of the elements will be close to the target.
You are right. Nice approach.
Nice try.
I could do that if I have 3 hours and preferably without someone in a suit staring at me while I am doing it.
The set solution is O(n) and array sorting is already O(n*logn), so how is it better?
The basic set solution only works on the simplified problem where we're looking for exactly the value. When we're solving the real problem the set solution is O(x*n) which might end up being way bigger than O(nLog(n)). For example with simple 1 element lists {500}{400} and target sum of 10, the set algorithm would take almost 900 cycles to calculate the answer.
@@maxintos1 Yes, you are correct
No, it's not. Because complexity for the set solution is actually O(x*n*log(n)). Plus, filling of a set is O(n*log(n)).
@@sanchousf That's not true. Using a hash table backed set is O(1) insertion.
The time complexity is O(x*n), but x can be arbitrarily large. In the third solution, it really depends on the sorting algorithm used. If you use mergesort or heapsort, it'll definitely be O(nlogn) but space will also be O(n). If you use quicksort, worst case is O(n^2), but space is O(1). In my opinion, you can't objectively say which solution is better without having an idea of what x would tend to be. For small x, the second solution is better. For larger x, the third solution is better.
That's actually a very good idea that I hadn't thought about; great video :)). But why not use binary search in this problem? The complexity would also be O(n log n): you iterate through the unsorted array and run a binary search on the sorted one to find the closest sum. Roughly speaking, the complexity would be 2 * n * log n, since you only sort one array and then use binary search with the other one for n log n. The concept of binary search can be used in a variety of other problems, but I'm not really sure if this idea can be applied elsewhere. The idea in the video looks like semi-dynamic-programming.
I also come up with this solution.
Same!
Here is my solution in C++:

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

void closestPair(std::vector<int>& a, std::vector<int>& b, int target) {
    std::sort(b.begin(), b.end());
    std::size_t a_index = 0, b_index = 0;
    int current = a[0] + b[0];
    for (std::size_t i = 0; i < a.size(); i++) {
        int current_target = target - a[i];
        std::size_t lower =
            std::lower_bound(b.begin(), b.end(), current_target) - b.begin();
        if (lower == b.size())
            lower--;  // every element of b is smaller; use the last one
        int temp = b[lower];
        // the closest value in b is either b[lower] or its left neighbour
        if (lower != 0 &&
            std::abs(b[lower] - current_target) >
                std::abs(current_target - b[lower - 1])) {
            temp = b[lower - 1];
            lower--;
        }
        if (std::abs(current - target) > std::abs(temp - current_target)) {
            current = temp + a[i];
            a_index = i;
            b_index = lower;
        }
    }
    std::cout << a[a_index] << " " << b[b_index] << "\n";
}
```
And what about the complexity of sorting the array?
That's what I came up with as well. But it will probably be slower, because performing the second `O(n log(n))` operation will eventually be slower than a `O(n)`. But it's still `O(n log(n))`, so maybe it's not that bad.
Yeah it will work, but it will be a bit more code, and slower.
3:57 - "This solution O(n)" - Are you sure about that???
I have the same doubt (the kind I'd have in an interview)... Iterating the first array is O(n), so should it still be O(n^2)? Or Θ(n), maybe, since a lookup in the set is Θ(1)?
If u hash the numbers of first array, then it will be o(n). If u put it in a set, have to use something like lower_bound() instead and additional checks,.. hashing would ensure o(n).
Finding an element in a set is O(log n) so the overall complexity should be O(n log n).
Wrong explanation, it's n log n
According to Stack Overflow, add, remove, and contains on a hash set can be done in O(1). So it's O(n) to add them and O(n) to search the other array => O(2n) = O(n)
Tip#2
It won't be O(n), since for each number in the first array we calculate the remainder and look at each number in the second array. Thus it's the same O(n^2) as the brute-force solution. Actually it's quite a bit worse, since it's really x * O(n^2).
As for the final solution, the visualization is nice and all, but it's actually quite easy to solve this without it. Just sort both arrays, one from small to big and the other the other way around. Start at index 0 on both; if the sum is bigger than the searched number, advance in the second array, otherwise advance in the first. Remember the sum at each step, and print the closest pair. Boom, problem solved, O(n log n).
No it will be nlogn since its searching in a set .
You're correct. Tip #2 when he said it would be O(n), I immediately thought: ..............what?
Searching in a set takes O(1) time
are we frnds ?
High quality as always, man. Thank you for your good work. I learn a lot with your videos.
These tips are amazing!! The final solution amazed me. I recently failed an interview for an internship at one of the big four, and I'm determined to study at least two hours a day for my next one. I hope you make more videos like these!
Where did you apply mate ?
@@RamizZamanJEEPhysics I went to a hackathon and left my resume, but you can apply online too, just search the name of the company and careers
Thanks mate
If you like Html & css webdesigning
ruclips.net/video/o_guxLAoLYY/видео.html
ruclips.net/video/PqrZrvaObZk/видео.html
Support & comment on my video
Another way:
Sort the first array (a1) in ASC order, while sort the other one (a2) in DESC order.
start with i1 = 0, i2 = 0 (i1 as index of first array, i2 as index of 2nd array)
Check a1[i1] + a2[i2], if it's less than given number, then "i1++"(to pick a larger number), and if it's greater than the given number, then "i2++"(to pick a smaller number). You will get the answer with following complexity:
Sorting: O(n*logn), twice
Iteration: O(n), twice (worst case)
This is the way I did it. Sort both arrays then it becomes a simple 2 pointer pattern problem.
Not sure if I get #2. You create a set, but the set is still a collection you need to compare against individually, which is still n^2. Did I misunderstand what a set is?
A "set" in this case being like a python set. Checking if a specific number in it exists or not is just O(1) after you've made the set
The cost of looking up any value for a 'set' data structure is O(1) time. You would just ask the set if it contains a specific number(ex: set.contains(5) ), versus iterating through the values in the set.
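For the simplified "exact sum" version of the problem (tip #2), the set idea can be sketched like this (my sketch, not the video's code):

```python
def find_exact_pair(a, b, target):
    seen = set(b)                  # O(n) to build the hash set
    for x in a:                    # O(n) iterations
        if target - x in seen:     # O(1) average-case hash lookup
            return (x, target - x)
    return None                    # no pair sums exactly to target
```

The membership test is an average-case O(1) hash lookup rather than a scan, which is what makes the whole pass O(n) instead of O(n^2).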
Regarding your last solution: if there are two equal numbers in one of the starting arrays, I think this approach could lead to a problem. In your grid-like visualization there would be a row (or column) with two equal numbers next to each other. Now say your target is 17 and you hit a spot with two 16s next to each other: you could end up ignoring the second 16 that sits to the right of the one you are checking. So when you skip the space next to the number you just checked, I think you first have to verify that the neighbouring numbers really are smaller, or am I missing something?
Maybe I'm wrong, but there wasn't any consideration for the complexity added for sorting in #3 or hash table lookup in #2. In this case, the change to the solution's complexity is negligible, but you can't forget to consider those steps.
We miss u here at Google!
@@ankitsuthar3025 yes
@@srt-fw8nh well, I won't say it is enough, but it's very important... also try to work on your problem-solving ability. 😀
How is the Google office? People say it's cool.
@@brendapanda244 it is amazingly cool..I love it.. at least u do not get bored!😀😀
Do the coding interviews reflect what you will use in a job?
It's a really idiotic idea to do it like that
My Tip #2: Find the inefficiencies in the brute force solution and see if you can optimize them. In this case it's the checking of the second array for each element in the first array, which can be optimized by sorting the second array and using binary search (I haven't watched the entire vid yet so I don't know if that's the intended solution or complexity).
Edit: Huh so two pointers may lead to an O(N) solution (after sorting)! Two pointers and binary search seem very related!
So funny isn't it.
Right, two years on from this: I had left a comment here from one of my other accounts saying DP is so hard and maybe coding isn't for me 😢.
Now that I'm back here, after two years of wholesome practice and grinding on CF, CC, and LC, I'm getting each and every word he says, like ABCD... Heck, I'm even getting better ideas than what he's conveying here.
If you're demotivated, just don't be. Keep practicing. I, personally, am not very bright academically. I take hours, even days, to digest easy basic concepts, and that's alright. As long as you get it in the end, it's a win for you.
Keep grinding. Keep hustling. Don't cry. Don't despair. We aren't all the same: some take minutes to understand segment trees, while people like me took a whole month to even get the idea of how they work, the logic behind their creation, and their usage in problems.
Just keep hustling. I know you'll get there.
BTW, did I mention I'm not a CS major? I'm studying physics, which I hate with all my passion. Maybe consider reading the only comment (which is mine, btw) on this video:
ruclips.net/video/6MPP_MqS0WA/видео.html
The most awesome explanation I have ever heard! Thank you, Dojo
You made it complicated.
A simpler solution: just sort both arrays and use two pointers, one for the first array and one for the second; compute the sum and store the difference and the numbers.
Then advance whichever pointer moves the sum toward the target.
Repeat the process: if the current diff is less than our stored diff, update the diff and the result numbers.
That's it. No need for extra calculations.
You could have solved the problem with the second method in O(nlogn) time if you used a BBST, and binary search for floor and ceiling values. Since we don't actually need the index of the values, we can use a TreeSet in Java (red-black tree) to implement this idea without it being too bulky. This saves from all the ArrayList sorting shenanigans.
that uses extra space tho, but with 2 pointers you can solve it with constant space
This amazing video steps got me a new amazing job. You are great! Thank you for the awesome content you are generating!
I think you can also achieve a nlogn by sorting, and do a modified binary search for the exact or the closest value.
yes, oh maybe you can sort the two arrays, mix them and perform binary search
Don't skip math and algebra in school kids.
Now he tells me....
This feels like minesweeper
Edit: now I'm majoring in CS
More than 90% of newcomers to programming learn from tutorials on YouTube or similar platforms like Coursera. However, when most of us finally feel confident about our skills in whatever programming language we chose to learn and try to start our own project, we get overwhelmed by the ideas, functions, functionalities, and components that we're positive we're going to need, and somehow we lack the skill of gathering all of that and putting it together into a final product. We DON'T EVEN KNOW WHERE TO START, even though we feel like we understand everything.
I took some time and put some effort into researching this and I figured it out!!
WE LEARNED IT THE WRONG WAY!!!
I've talked about it briefly in my first video in a long tutorial series here on youtube:
ruclips.net/video/oVd3NivzZx8/видео.html
YK, Thinking out loud. In an interview setting, it would be intimidating to approach a problem, thinking out loud. It takes experience and practice to master the technique. Your explanation is so valuable, learning how to think and what questions to ask.
Here's an idea: take the LeetCode site, where there are over 700 problems, and solve one problem a day, showing how you'd approach the problem, how to think, what questions to ask, how to optimize... I think that's so valuable. Who knows, by the end of the year you could gather up all those videos and create another Udemy class; I think lots of people would appreciate that.
I ask the interviewer should Iook for a better solution?
Time complexity required to find a solution that pleases the interviewer is O(n²)
😂😂
I'm not even into coding but I just like you 😘
Great buildup from vague idea of how to do it, to coming up with a way to traverse that grid. Really cool.
Your tip 2 also requires iterating through a set. Doesn't that make it the same complexity as tip 1?
No, with tip 2 he only iterates through the list. He takes the first element of the list and sees what sum makes 24; call that number x, so x + first element = 24. You can use set properties to check whether x exists in the set in constant time, and if it does you have found the answer. That makes it O(n), because you iterate through each element once to see if there is an x that makes this true. Hope this makes sense.
As an interviewer the follow up would be: what if the numbers are repeating
It's useful when you practice solving algorithm questions, but I doubt how much time you have to go through this kind of thinking process in actual interviews, especially considering that interviewers usually have two prepared questions to ask. So practice is still the key. Don't ever try to rely on the techniques introduced here to hack an interview.
This is not true for interviews I have conducted. I would rather get a candidate who thinks through a problem and reaches even a brute-force solution to something they have never seen before. Sometimes you can tell that a person has practiced the question and simply memorized a solution: gaps will start to show when you ask questions about basic concepts in their target language that are not related to solving problems. Practice is important, but most professionals don't have the time to sit and run through programming problems in their free time. But you will have amassed many skills just from working full time and will feel more comfortable showing up to interviews without doing the overnight cram sessions I used to do in college. And yes, there will be times when you can't answer the question. I have been there multiple times (in fact, with problems similar to the one in the video). And that's okay; that just means you're not the right fit for the job. I also don't want to reach a position in which I realize later on that I don't have the technical knowledge to contribute well. It doesn't feel good.
Just sort both arrays; from one array pick one element at a time and, using binary search, look for the other half of the sum in the other array. You can write two binary searches for this: one should return the exact number or the closest number higher than it, and the second the exact number or the closest one smaller than it. Actually, you only need to sort the array the binary search is being done on. Overall complexity is O(n log n).
In the end, what you came up with is very similar to this, and visualizing problems and solutions is a very important skill.
Anyway, I wrote this just to show that there are other ways to think about such problems.
The process of sorting the arrays requires time and space as well.
he states at the end the time complexity is O(nlogn) and space complexity O(n) "assuming that you use an nlogn sorting algorithm" i.e. mergesort or quicksort or te like. mergesort is of time complex O(nlogn) and space O(n) thus we can assume he is accounting for this correctly.
Code in python bro
Just off the top of my head (and I didn't actually implement this solution, so it may not work).
Sort both of the arrays, and have a pointer point to the first element of the first array, and another pointer point to the last element of the second array.
Grab the sum of the two elements holding the pointer, if the sum is too low move the pointer on the first array to the next element, if the sum is too high, then move the pointer on the second array to the previous element.
Continue this procedure until the pointers criss-cross, i.e. while(secondP > firstP), MEANWHILE keep track of the lowest difference so far between the sums and the target, and their indexes.
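A quick Python sketch of that procedure (with the stopping condition adjusted so each pointer simply stays inside its own array, since the two pointers index different arrays and can't literally criss-cross):

```python
def closest_pair(a, b, target):
    a, b = sorted(a), sorted(b)
    i, j = 0, len(b) - 1      # low end of a, high end of b
    best = (a[i], b[j])
    while i < len(a) and j >= 0:
        s = a[i] + b[j]
        if abs(s - target) < abs(sum(best) - target):
            best = (a[i], b[j])
        if s == target:       # can't do better than an exact match
            break
        if s < target:
            i += 1            # sum too low: take a bigger a-value
        else:
            j -= 1            # sum too high: take a smaller b-value
    return best
```

After sorting (O(n log n)), the sweep itself is O(n): every step retires one element from one of the arrays.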
You are brilliant.
Thanks for the knowledge sharing and innovating problem solving skills
I have a question about the code attached to the video: why is it 'j = len(a2_sorted) - 1' instead of 'j = len(a2_sorted)' on the ninth line?
Hi... Good video. Like always
I think the time complexity of method 3 is more than method 2's.
Isn't it?
Can't we sort only one of the arrays and then perform a binary search on it for each element of the other array? Time complexity still remains O(n log n).
greedy thinking! it's my solution too.
Nice idea, but I think in this case the visualization makes the problem a bit more complex than it is. I came up with a simple short solution (C++) with runtime complexity O(n log n) and space complexity O(n). In my solution the arrays can have different sizes.

```cpp
#include <cmath>
#include <cstdlib>
#include <set>
#include <utility>
#include <vector>
using namespace std;

pair<int, int> sumUpToTarget(vector<int>& v1, vector<int>& v2, int target)
{
    if (v1.size() < 1 || v2.size() < 1)
        return {};
    set<int> s(v2.begin(), v2.end());
    // { best pair so far, its distance from the target }
    pair<pair<int, int>, int> p = { { *v1.begin(), *s.begin() },
                                    abs(target - (*v1.begin() + *s.begin())) };
    for (auto elm : v1) {
        auto pos = s.lower_bound(target - elm);  // first value >= target - elm
        if (pos == s.end())
            pos = prev(s.end());
        if (abs(target - (elm + *pos)) < p.second)
            p = { { elm, *pos }, abs(target - (elm + *pos)) };
        if (pos != s.begin()) {
            // the value just below target - elm can be even closer
            pos = prev(pos);
            if (abs(target - (elm + *pos)) < p.second)
                p = { { elm, *pos }, abs(target - (elm + *pos)) };
        }
    }
    return p.first;
}
```
Good luck coming up with these insights first time during a real interview rofl.
just had this on my interview... of course I only came up with brute force solution. :)
You don't really need those exact insights to solve the problem, it's just that visualization can help. He was basically brute force getting the insights. You could have just as easily found the answer by just thinking how if you sort the data you could eliminate a lot of checks, because if you know that array1[i] + array2[j] > target then there is no point checking the value of array1[i] + array2[j+1].
When you sort the data and look at it {1, 4, 7, 10} {4, 5, 7, 8} with target 13 you could notice that 10 + 4 is bigger than 13 so there is no point checking any other number with 10 so we move to 7. 7+4 is less so move forward. 7+5 less so keep going. 7+7 >13 so no point going forward. Move to 4 and etc until you need to move left or right, but there are no more elements. Then just return the answer you hopefully kept track of during the looping.
If you were given two arrays of 1000 elements, where O(n^2) is crap, it MIGHT push you in the direction of the spatial solution, but good luck implementing that on a whiteboard in hour three of an interview loop lol
Do some of the problems and get through a few interviews and you'll be able to find it. This problem really isn't so complex.
@@zubich did u get the job lol
Thank you! The Udemy videos are perfect; it's the only Udemy course I actually used... sadly, such ambition, yet no follow-through. I have a very important interview coming up and I feel nervous because I'm competing with people who were formally trained in CS. I just learned everything on the fly with no structure; I didn't even know there were algorithm types or what a binary search was. I would do them all the time, because I have a physics background and understand optimization, but I never learned the names. After seeing the different types of algorithms, I feel like I'm starting with some sort of foundational idea of how to do it. I thought binary search meant you put a 0 if it's not what you were looking for and a 1 if it was. lol
Thank you for clear and concise explanation with visuals. Keep up the good work
Wait, at 4:08 this is still O of n squared, since you have to check each number from the first array against every number in the second array. And you can't search the first array in less than O(n), so it's still going to be O(n^2)
I agree. I believe that with any solution on an unsorted array(s), you will still need to check every element. With them being of the same fixed size, that's n^2.
This also caused me to pause and then I did a kind of deep dive on this problem.
Even with the elements sorted, a brute force algorithm is n^2.
Consider::
desired: 70
array1: [50, 51, 52, 53, 54]
array2: [20, 21, 22, 23, 24]
Whether we start at the end, or consider a desired number of 78 or greater where we start from the beginning, brute force is O(n^2).
So we would like better if possible.
Other people have posted the same solution, but typing it out helps me :-)
I do like to start with how sorting the data will help me. My thought was with the arrays sorted you can now iterate array1's elements(length 'n' times), then binary search array2 for a number that adds up to our desired number, storing off the closest sum as we go. The binary search on the sorted array2 reduces all array1 operations (n) on array2 to log(n). So we have nlogn.
A bit irrelevant to time complexity but now that I think about it, only one array needs to be sorted.
Brute force solution be like: dispair, dispair, dispair. Very useful tips! Thank you!
0:20 What is the name of that presentation tool?
I love you bro u changed my life
What exactly is the set? Is it a hash table? Does that mean it's constant-time lookup? I'm just confused about how the runtime is linear for the second version of solving the first problem.
The reason for putting the data in a set is that a set is built on a hash table, so the data can be accessed via hashing. Lookup in a hash table is quicker than in an array: for an array you have to iterate over the list, while a hash table can find the item you're looking for right away if it exists. In Python, at least, this is how sets are built. You still need to iterate over one of the arrays, so I believe that's what makes the solution linear.
For your second approach you will require O(n log n). Suppose you have two arrays A and B with n elements each, and for each element in B you perform a binary search, O(log n), looking for the other number of the pair in A. For n elements in B, the complexity is O(n log n).
Actually, I was looking through the comments to see if someone had mentioned this. The second approach will not take O(n) time; it will take more.
But we can't use binary search since the two arrays are unsorted. So the second approach also would yield O(n^2) time complexity(Please correct me if wrong)
@@vedantsharma5876 You can first sort both arrays in O(nlogn), then do the binary seach for each element in B, which would be also O(nlog), since each binary seach takes O(logn). So, in the end, it should be O(nlogn).
If the arrays were already sorted, doing the algorithm from the video would only take O(n), while binary seach for each element in B would still take O(nlogn).
@@rickvstange oh yes. Thanks Ricardo!
I would say it's O(n*m), where m is the value of the target. Think about the corner case where the target is huge, like 1000000000, and all the numbers are small (0, 1, 2, etc.): you would have to iterate 1000000000 times to get the answer.
What is the tool you used for creating this video YK? I think it is amazing for teaching by this way.
Yes. It's good. Looking for the same
He's a coder, maybe he wrote it himself
@@philcooper2408 :o woa
My solution to this would be to sort just one of the arrays. Then, for each element of the second array, do a binary search in the first array for the 'complement' (the number that would make the sum exactly equal the target). This should get you closest to the target. From there you just keep track of the minimum absolute difference from the target. This is also an O(n log n) solution.
I really really enjoy your videos, problem solving the most, i hope your channel covers more about problem solving and competitive programming
If you like Html & css webdesigning
ruclips.net/video/o_guxLAoLYY/видео.html
ruclips.net/video/PqrZrvaObZk/видео.html
Support & comment on my video
@17:39
Interviewer: Am I a joke to you?
Love watching these even if this isnt my major.
Eddie Cho come to the computer science side, let the algorithms flow through you
Can u please give example to type fast
I would just buy another CPU
You can subscribe to this channel to stay updated with latest programming videos ruclips.net/channel/UC33VKuS1b-JmGh-Zi-oWOyA
Is searching in a set O(1) time?
I was struggling to find something like this, thank you
Good sales pitch. A simple solution: you could associate each value with a bit of an integer (12 bits: the 6 MSBs for the first array, the 6 LSBs for the second), going in total from 0 to 2^12 = 4096. Add all the values associated with a 1. This would go through all the combinations at light speed.
5:30 Not every language has a function to check whether a particular element is in a list. In C there isn't one, so you have to compare against each element yourself, making the whole thing O(n^2).
8:45 If you sort first, then you have to account for the sort's complexity, O(n log n).
Right, he is not a programmer at all; checking in the set also takes n operations, but he talks about it as if it were nothing. He said other silly things too. If I am wrong, explain it to me and I'll accept it.
ArtCool Live It's because the set is a hash table, so lookups are constant, O(1).
If it's a hash table, you still have to go over every item, which makes it O(n)
Here is the solution I came up with in Python (note: it assumes the input list is sorted):

```python
def find_pair(x, number):
    # Two-pointer scan over a sorted list x.
    start = 0
    end = len(x) - 1
    while start < end:
        base = x[start] + x[end]
        if base == number:
            print(str(x[start]) + " and " + str(x[end]) + " adds up to " + str(number))
            return True
        elif base > number:
            end -= 1
        else:
            start += 1
    print("No pair in this set of numbers adds up to " + str(number))
    return False
```
The problem usually for me is not coming up with the solution, but actually coding the solution after coming up with it, because I suck at coding.
A very nice explanation of the thinking process. But another trait of a good programmer is knowing his data structures. Java, for example, has TreeSet with O(log n) time complexity for its ceiling/floor methods, and that would follow straight from your "simpler version of the problem". So, for those of us not bright enough to come up with the original solution, it's worth learning the proper tools our languages provide.
You just wrote a DDA 👍👏
what is a dda
This can be done in O(n) time. Go over the first list and create a new list with the numbers required to reach the target; then go over the second list and compute Math.abs() to determine how close you are to the target. If you are closer than before, record that tuple in a list. After iterating over the second list, you know exactly how close you can get and with how many tuples. No need to sort; if you sort, you end up at O(n log n). The only problem with this solution is that you need O(n) memory.
he didnt explain very well...
yours is better
Learnt so so much in this video alone. Thanks a lot for this high-quality content ❤
The second solution is not O(n).
Can you please explain the "O"?
It refers to the complexity of the operation. That is, in terms of the size of the input how much effort or memory would it take to run. For example, the first naive solution of comparing every pair of values from each array had a time complexity of O(n^2). In this case, for input arrays of size n, there are n^2 possible pairs of values that take one number from each array. For each possible pair, we would need to calculate the sum of the pair, and compare that value to the target. This matters because it means that as the size of the input arrays grows, the number of pairs, and so the number of operations we must perform grows faster than n. Imagine comparing two arrays of size 4, like in the video; even a brute force solution would only require us to calculate 16 pairs, not too bad. Instead imagine that the arrays had 20,000 elements each. Now our brute force solution requires us to calculate 400,000,000 pairs, and it will only get worse as the input arrays get larger.
To try to sum it up, the complexity of an algorithm refers to how the amount of work required to run the algorithm scales with the size of the input.
Time complexity generally refers to the number of distinct operations we must perform.
Space complexity generally refers to the amount of memory needed to store information.
For example, in the final solution in the video, the matrix used to visualize the problem was of size n by n, and had n^2 cells. So if we tried to compute and store the value of every cell, the time complexity would be O(n^2) (the number of pairs we have to check) and the space complexity would also be O(n^2) (we need a matrix with n^2 cells). This means that in the implementation of this algorithm, it would be important that we not actually create such a matrix, but instead find a different way to store any information we need, and only compute the values in the cells we check starting from the top right.
There is more to it; specifically, there are actually two notations here: "O" (big-O notation) and "o" (little-o notation). Both of these notations deal with the upper bound, or worst-case scenario, of the problem. Big-O notation is what people generally focus on, so if someone says just "O" or "O notation" they usually mean big-O.
(There are also Omega notations that deal with the lower bound, but that usually isn't as important)
If you are curious, this page will probably be more thorough than I was here: en.wikipedia.org/wiki/Time_complexity
Samuel Elliott wow thanks, now I understand
In case anybody else reads this in the future, the easy way to think about it is: for 2 arrays of 5 elements each, brute force = 5**2, while the final solution is (5 * log(5)) * 3 (two sorts, then the final scan). The brute force comes out to 25 and the final solution comes out to 24.14 (using natural log), which is pretty close, right?
Then if you look at 2 arrays of length 1000, you'd have 1000**2, which is 1,000,000, while the final solution would be (1000 * log(1000)) * 3 ≈ 20,723, far less than a million.
It only gets worse from there. ;)
For the second algorithm (with a set) the complexity will be O(x*n*log(n)). First, you need to fill the set: inserting an element into a (tree-based) set is O(log(n)), so inserting n elements is O(n*log(n)). Second, you iterate through the array (O(n)) and search the set (O(log(n))) for each element, which again gives O(n*log(n)). And you do it x times.
PS: On the other hand, if you use a hash table for the set, searching for an element should be O(1) (if there are no hash collisions). So, probably yes, it could be O(x*n).
18:07 Check out Project Euler. It has 645 problems that can be solved, and once you solve a problem you can see how others solved it. IT IS FREE!
I'd definitely try it :) I love to increase my problem solving as well ;)
After creating the table, a better way to find the closest number is binary search, in my opinion. In your examples the closest number was always somewhere in the middle, but what if it's in the last box? Binary search will do wonders in that case.
This was on my algorithms exam last semester 😱😱 and we had to prove that ours was most optimal
Another great way to solve this problem:
1. Sort one of the arrays: O(n*log(n))
2. Make that sorted array into a balanced binary tree (this is O(n))
3. For each elem in the other array, DFS-search the binary tree for the closest pair
4. The DFS method is defined as follows:
   - if bTree.root + elem is closer to the target than the current best pair, then (bTree.root, elem) becomes the best pair
   - if bTree.root + elem < target && bTree.right.nonEmpty then DFS(right)
   - else if bTree.root + elem > target && bTree.left.nonEmpty then DFS(left)
The DFS lookup for the closest pair for a single element takes O(log(n)) time. Repeating for each element in the second array gives us O(n*log(n)) time.
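A rough Python sketch of this tree-descent approach, under the stated assumptions (balanced tree built from the sorted array; the class and function names are illustrative):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def build_balanced(sorted_vals):
    """Build a balanced BST from a sorted list in O(n)."""
    if not sorted_vals:
        return None
    mid = len(sorted_vals) // 2
    return Node(sorted_vals[mid],
                build_balanced(sorted_vals[:mid]),
                build_balanced(sorted_vals[mid + 1:]))

def descend(node, elem, target, best):
    """Walk down the tree, keeping the pair whose sum is closest to target."""
    if node is None:
        return best
    if best is None or abs(node.value + elem - target) < abs(sum(best) - target):
        best = (node.value, elem)
    if node.value + elem < target:
        return descend(node.right, elem, target, best)
    elif node.value + elem > target:
        return descend(node.left, elem, target, best)
    return best  # exact match; no need to go deeper

def closest_pair(a, b, target):
    tree = build_balanced(sorted(a))   # O(n log n) sort, O(n) build
    best = None
    for elem in b:                     # n descents, each O(log n)
        best = descend(tree, elem, target, best)
    return best

print(closest_pair([-1, 3, 8, 2, 9, 5], [4, 1, 2, 10, 5, 20], 24))
```

Each descent is a single root-to-leaf path, so it visits O(log n) nodes; functionally this is equivalent to a binary search over the sorted array.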
Exactly. This is much better, because if the arrays have different lengths, you can sort only the smaller one. And even if they are the same length, the real-world performance is better.
U are the one who inspired me to learn programming ❤
Thanks bro 👍
This is brilliant. Thank you so much for this. Easily the most detailed and clearly articulated and digestible, general approach to solving algorithm problems.. While watching youtube videos on algorithms, so many of the people solving them just seemingly grab the solution out of thin air. No doubt this is due to them doing a ton of algorithms in the past, recognizing patterns and reaching for solutions or similar solutions that worked for them in the past. The problem with this for people newer to solving algorithms is as I mentioned - this seems to come out of thin air and doesn't help, because it doesn't show the thought process behind what came up with the original pattern that they're grabbing for.. hope that makes sense and thanks again for the video.
solution pattern looks like A* Algorithm (:
Hey YK,
I dream of working at big tech companies like Google, Facebook, Apple & Microsoft, so I used to search this topic on Google. Then I watched your "How I Got a Job at Google as a Software Engineer (without a Computer Science Degree!)" video. From that I got hope, and I started learning programming. But on Google's careers site I saw that every Software Engineer job requires a Computer Science degree, and I don't have one. What can I do now? Can I expect to get jobs at these tech companies?
Please help me by answering this question.
Thank You in Advance.
Another great video - helpful and practical!
(久しぶり!)
I wouldn't think that companies would be interviewing. It seems to me that candidates are doing the interview now. What benefits do you offer? Why is this company hiring now?😂
Google Interviewer : We have a problem try to find an answer.
Me : First solution ...DoS attack Sir. This will make sure the problem doesn't exist at all. I hate problems. You know ?
Sir what's your salary per month when you worked in google?
It would be somewhere near a crore
Alternate approach:
Input: arrays A, B; int sum
1. Sort array B
2. For every element of array A, binary search for the element closest to (sum - A[i]) in array B
That was what I thought of immediately, but on second thought his solution might be better. For just one target, both solutions have a time complexity of n*log n, and the binary-search one might actually be faster, since sorting requires a lot of memory writes. But if you need to use the same two arrays to find multiple targets, his solution has an amortized complexity of n per target.
Not to mention it looks much more elegant.
For step 2 you don't need to go through all values of A, only until a perfect match is found, at which point you can terminate early.
Doesn't work; you also need to sort array B, and you can't do binary search because you have two pointers.
And yet, sorting arrays also has complexity, which should be taken into account...
Are you seriously a full-time youtuber 😶
He worked at Google, bro.
I am so discouraged as I could not wrap my head around this "simple" problem... Not being able to solve this kind of thing drives me crazy as I love to code. But I am horrible at things like this. Help? How can I improve myself in this regard?
I’m prepping for a big interview at one of the big 4 companies (or 5 if counting Microsoft) Thank you for this video!
How'd it go?
Python one-line solution (note: sorting ascending by absolute difference means index [0], not [1], is the closest pair):
foo = lambda a1, a2, val: sorted([(x + y - val, (x, y)) for x in a1 for y in a2], key=lambda p: abs(p[0]))[0]
Run it like this: print(foo([-1, 3, 8, 2, 9, 5], [4, 1, 2, 10, 5, 20], 24))
Love your videos man 💪 you made me fall in love with Python 😂 I'm learning how to code it now 💪
how's it going so far?
This is a solution that returns all of the pairs. The difference from CS Dojo's answer is that it stores the answers in a dictionary (hashmap) and doesn't stop once it finds a solution that matches the target.
```python
def solution(arr1, arr2, target):
    xs = sorted(arr1)
    ys = sorted(arr2)
    i = len(xs) - 1
    j = 0
    total = xs[0] + ys[0]
    diff = total - target
    distances = {}
    distances[abs(diff)] = {(xs[0], ys[0])}
    smallest_diff = abs(diff)
    while i >= 0 and j
```
5:44 until the end is the most important part
I'm new to programming; my first thought was to take the absolute value of target - sum(pair) and compare it to see if I get a smaller one.
#Python
Kinda like:
t = target
# algorithm to find a pair
if abs(target - (pairval_1 + pairval_2)) < t:
    t = abs(target - (pairval_1 + pairval_2))
    x, y = pairval_1, pairval_2
    if t == 0:
        return x, y
return x, y
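The idea in that comment, tracking the smallest absolute difference seen so far, works as a brute-force baseline. Here is a runnable version (a sketch; the function name and sample inputs are illustrative):

```python
def closest_pair_brute_force(a, b, target):
    """O(n^2) scan over all pairs, keeping the one whose sum is nearest the target."""
    best_pair = (a[0], b[0])
    best_diff = abs(a[0] + b[0] - target)
    for x in a:
        for y in b:
            diff = abs(x + y - target)
            if diff < best_diff:
                best_pair, best_diff = (x, y), diff
                if best_diff == 0:      # exact match; can stop early
                    return best_pair
    return best_pair

print(closest_pair_brute_force([-1, 3, 8, 2, 9, 5], [4, 1, 2, 10, 5, 20], 24))  # (3, 20)
```

This is the O(n^2) brute force from the start of the video; the smarter solutions in the thread exist to cut it down to O(n log n) or O(n).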
Coding is HTML. Programming is making apps and working with data. Anybody can "code"; I "program". That is what we always called it. Been doing this since 1983.
James Ryan no
AGREED
@@aserillll He's right! You can't program HTML because it's not a programming language.