LETS GO! I HAVE MADE IT!!!
Hello prim-ye-jun
hello there Captain Prime
my goat
hello little guy
hello little RUclipsr!
bro primeagened primeagen
"from this little youtuber primeOgen"
trying to understand if hes just trolling, he said it so seriously
@@PP-ss3zf what do you think
@@sahilverma_dev my comment.. thats what i think
@@PP-ss3zf think some more
@@PP-ss3zf he's obviously trolling, he knows him
There's a similar video by Matt Parker (Stand-up Maths): "Someone improved my code by 40,832,277,770%." I was part of the team that optimized his 1-month solution down to 300 microseconds. We submitted ours kind of late so he wasn't able to cover our big algorithmic changes, but many of the techniques you mention here applied there as well.
18:00 no coincidence: the range from 97 to 122 falls between 32*3 = 96 and 32*4 = 128, so each lowercase letter gets its own distinct remainder, from 97 % 32 = 1 up to 122 % 32 = 26
Damn, it makes you realize how much thought the original programmers put into making things elegant. And then we ended up with JS, web apis, and frontend frameworks...
i wonder how many more ascii tricks there are, it's not very well documented, or just hard to find
@@ismbks My favorite ASCII hack is that each capital letter A-Z (65-90) is exactly 32 away from its lowercase counterpart (97-122), in such a way that the only difference is the 6th bit (2^(6-1) = 32). That makes things like case-insensitive comparisons or conversions to upper or lowercase SUPER fast bitwise operations: ignore, unset, or set the 6th bit respectively.
@@HtotheG hell yeah.. i already knew about this one but it's a really cool hack people should definitely know! apparently cloudflare uses this for fast string filtering so it must be good..
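A minimal Rust sketch of both tricks from this thread, assuming the bytes really are ASCII letters (on anything else these bit games misfire):

```rust
fn main() {
    // Bit 5 (value 32 = 2^5, the "6th bit" counting from 1) is the only
    // difference between 'A'..'Z' (65..90) and 'a'..'z' (97..122).
    assert_eq!(b'G' | 0x20, b'g');  // set the bit  -> lowercase
    assert_eq!(b'g' & !0x20, b'G'); // clear the bit -> uppercase

    // Case-insensitive comparison: ignore the case bit on both sides.
    let eq_ignore_case = |a: u8, b: u8| (a | 0x20) == (b | 0x20);
    assert!(eq_ignore_case(b'Q', b'q'));

    // The 18:00 trick: % 32 maps either case to the same 1..=26 index.
    assert_eq!(b'A' % 32, 1);
    assert_eq!(b'a' % 32, 1);
    assert_eq!(b'z' % 32, 26);
}
```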
@NeetCodeIO back then programmers were geniuses. You basically needed a PhD or some deep understanding of math.
The bar has really lowered. Which I guess was necessary to scale.
To preface: you explained everything very nicely, even the tricky bits :).
The stack is handled by the CPU under the direction of the OS. There is still overhead when you cross a page boundary. The heap is not handled by the OS but by whatever allocator you use (the allocator mmap()s pages when needed). Allocators usually use "bins/buckets" for various allocation sizes, so it's pretty fast, unless the allocator has to mmap() some more memory.
Anyway, what I'm trying to say is that it's complicated. If your language let you, you could even use the stack as a dynamic array. Or you could mmap() a big piece of memory and just use it as a dynamic array, since memory doesn't get allocated until you touch it, which gets you the same result as the stack. If the array size is fixed, the compiler could even just reserve a piece of memory from when the program is loaded (like it does for const strings).
Cache locality is also a bit more involved. CPUs cache memory in "cache lines", which are usually 64 bytes. And yeah, if your resize moves the data, then all those cache lines become useless. Then again, the memcpy puts the new data into cache, so it's not ~that~ bad, just that something else gets thrown out. And there are more levels of cache: L1 is closest to the core but way smaller than L3, of course.
And yeah, the CPU "prefetches" data as you read it in. It even figures out the direction, so going 0..n is the same as n..0.
In short, you always want to use as little memory as possible, and keep the memory that is accessed at the same time as close together as you can. If you can keep it in registers, like the bitfield solution, then you're golden. And you ~might~ want to align/pad to some power of 2 (especially to 16 bytes for SIMD, which you even had to do on older CPUs).
PS: Oh, and your solution of subtracting 'a' would also be faster than modulo (modulo is a divide, so of course subtract is faster). Btw, bitwise operations usually take 1/3 of a CPU tick, the fastest operations there are (except maybe mov between registers).
This ☝️
It is not about stack or heap.
These are just simple introductory concepts taught in University, but the majority of the people stop at that and never actually understand how memory works.
There is an MMU and a TLB inside the CPU.
There are virtual pages of 4 KB that are loaded via page faults.
There are .text, .data and .bss sections, which are neither heap nor stack but are still loaded into the program's address space.
... although modulo (%) 32 is optimized to an AND (&) with 0x1f, so it's basically the same, and other modulos by compile-time constants are normally optimized to a multiply plus shifts, so not so bad. And bitwise ops aren't 1/3 of a clock, they take a whole one: there are just 3 or 4 ALUs that can do them, so 3 or 4 can run each tick if nothing else is happening. Also, since Ice Lake the move elimination you refer to has been removed on Intel (sadge...), so mov takes the same time as other ALU ops now... but Zen still does it.
@egor.okhterov I'd assume the devs in the video read the test string at runtime from stdin or a file (so it isn't baked into the binary as a constant), because otherwise the compiler, after linking, could just optimize toward the answer, maybe even fill it in 😂, making the timings invalid.
What would otherwise be the point of the problem, if not automation? You'd write the first solution you came up with and wait a bit, thinking about how a younger you would have wasted time optimizing the code hahaha
But it's nice to finally find/read a more technical YT comment thread.
I really wish I could share RUclips comments.
Bro made 2 hours of content about him
this little youtuber 🤣🤣
he should stream as well
@@plaintext7288 fr fr
@@plaintext7288 agree
Primyejen 🤣🤣🤣
That's classic
That is a BRILLIANT video, loved watching it.
I'm honestly glad there are people out there that enjoy this stuff as much as I do. Love deep technical concepts.
These random topic videos have been really insightful, great content!
I am a lead web developer and have never done any leetcode except in university. I recently started leetcode to get into a big name company that pays like 10%-20% more than my current company and videos like this are very eye opening!
That's quite a bit less than I would've thought, though. If you like the folks you work with a lot, don't get a new job for anything under 25% more than what you make now.
Being on a horrid team is the worst. I'd much rather work for less money than work with folks I dislike.
I think you can still get a cache locality boost using an array, because the array’s memory is next to other stack variables. That means the array’s memory is more likely to be in the same cache line as the other stack variables.
You only need two pointers referencing input, what array?
Pointers are pointing to underlying array which has to be accessed (through the pointers)
What I like about your explanation is you don't "assume" the audience knows a thing; you dive into the tiniest detail, like what even is an AND operation. Whereas college professors always have that assumption: oh, you guys must already know about stack, heap, memory allocation, let me talk about this scheduling algorithm...
Because most professors suck and only “teach” for a paycheck
Feels like Boyer-Moore, but without the pain of preprocessing bad-character/good-suffix tables. Very nice.
0:15 "little youtuber" 😂😂
I had to do a double take just to make sure... 😂
I double-checked. And still didn't believe it at first, and then I turned the captions on just to be sure.
You can create static arrays in Python with the "array" module. Still not sure if that qualifies it as a "real" language, though.
Async or parallel optimization is also really interesting from a data structure perspective. Since a chunk of cache has a specific size, we want a structure that uses as much of a chunk as the algorithm can handle.
This means in a forward sliding window approach we can assign each thread a starting position and collect the results (see the sketch after this comment). Likewise, you can often use multithreading to turn O(N^2) of wall-clock work into O(N × (N/threads)), which leads to great improvements on specific hardware.
But it's hardware specific. Currently I'm working with a controller that only has one RAM but both SIMD and MIMD. In that case you would either do the backward sliding window on the CPU, or try to fit the whole algorithm in the MIMD and do a forward brute force.
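A minimal sketch of that forward split in Rust, assuming lowercase ASCII input and the video's 14-distinct-characters window; the thread count and input string are made up, and each chunk overlaps the next by W-1 bytes so a window straddling a cut isn't missed:

```rust
use std::thread;

const W: usize = 14; // window width from the video

// Scan one chunk: return the end index of the first all-distinct window.
fn scan(chunk: &[u8]) -> Option<usize> {
    chunk
        .windows(W)
        .position(|win| {
            let mut mask = 0u32;
            for &b in win {
                mask |= 1 << (b - b'a'); // one bit per lowercase letter
            }
            mask.count_ones() as usize == W
        })
        .map(|i| i + W)
}

fn main() {
    let input = b"qwertyuiopasdfghjklzxcvbnmqwertyuiopasdfghjklzxcvbnm";
    let n_threads = 4;
    let per = (input.len() + n_threads - 1) / n_threads;
    let answer = thread::scope(|s| {
        let handles: Vec<_> = (0..n_threads)
            .map(|t| {
                let start = (t * per).min(input.len());
                // Overlap by W - 1 so no window is lost at a boundary.
                let end = (start + per + W - 1).min(input.len());
                let slice = &input[start..end];
                s.spawn(move || scan(slice).map(|e| start + e))
            })
            .collect();
        // Earliest hit across all chunks wins.
        handles.into_iter().filter_map(|h| h.join().unwrap()).min()
    });
    println!("first marker ends at {:?}", answer);
}
```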
9:50 another bit of overhead that could've been avoided at that step is reusing and clearing one vec, instead of harassing the allocator for a new one every 14 bytes lol
"the actual runtime is what matters" - tell that to the average react developer
Lol lol lol, they're too bothered about elegant abstraction while their apps keep making people replace their phones every two years
I died at "we talked a bit about memory" 😂
CPUs know how to add numbers together, even floats, though. ALUs and FPUs make it so the difference between a left shift and a multiply by 2 isn't a thing anymore.
I enjoyed this. I like the mention of bit masks. With a 32-bit buffer, it is just so much faster to deal with bits as an index (see the sketch below). This assumes only lowercase (or only uppercase) Latin characters (26 letters).
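A tiny sketch of the bit-as-index idea in Rust (lowercase ASCII assumed):

```rust
fn main() {
    let mut seen: u32 = 0; // one bit per letter 'a'..='z'
    for &b in b"leetcode" {
        let bit = 1u32 << (b - b'a'); // letter -> bit position 0..=25
        if seen & bit != 0 {
            println!("duplicate: {}", b as char);
        }
        seen |= bit; // mark the letter as seen
    }
    println!("distinct letters: {}", seen.count_ones()); // prints 6
}
```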
Maybe I misunderstood what you wanted to say in the beginning, but CPUs are totally able to add two numbers together.
Does that in the end boil down to binary operations? Yes. But except in some very esoteric CPUs it doesn't run those binary operations, but there is dedicated circuitry to do the addition, in many cases in 1 cycle (e.g. on x86 it's a single uop, as well as on most embedded CPUs)
I feel like a lot of these optimizations actually imply knowledge of the data and are biased towards a hypothetical success or failure case... but if you know your data, there's a whole world of N possibilities to optimize for a specific use-case..
They do imply knowledge. But quite broad knowledge is usually more than enough. For example in bitmasking: the letter "A" is encoded as 65 (0x41) in ASCII, and UTF-8 uses the same byte. We don't need to know the specific value; knowing that it's one byte per char and that we can check equal or not is already sufficient. Going into SIMD instructions forces you to know that 32 bit = 4 × 8 bit, so you can check 4 chars in one go.
A good optimisation is usually something really, really trivial. You might want to look up how the DFT or the inverse Fourier transform works. That one simple trick enabled a ton of things, with image compression, nuclear detection and GPS being only a few of the applications.
At 14:20 he is reallocating the array every time the window changes, but with a well-constructed loop it is possible to reuse the same array and toss out the indexes we know can't have been updated and still hold values from the previous sub-string. In C#, doing this makes the algo 4x faster.
25:11 actually you'd see the p's, but who's counting? xD
In any case, the reverse iteration to guarantee taking the maximum step size every time was definitely the coolest optimization in my book. The second the Primeagen pointed it out it was like HOLD UP, that's so freakin clever. Very cool stuff. Not often do I see an optimization that makes me "teehee" like that.
Cache locality does matter even for small arrays vs vectors. Vectors have more space overhead, as they need to track their length, data pointer, allocated capacity, and so on. They also cannot be packed with whatever other data is in the current stack frame. So it is harder to get the entire working set to fit in the L1 cache, and even if it nominally fits, it's more likely to have parts of it evicted and have to be fetched from the L2 cache when task switching happens in multitasking operating systems. Taken to the extreme, you have programs like CPUBurn which fit the entire program into just the main CPU registers and stress test the CPU by cycling as fast as possible, never reaching out even to CPU cache.
Your point applies more to cache lines, which is where moving data from system memory to the L3 cache happens to bring in the _next_ data you need. The concept is related, but not what matters here.
I am a fan of Prime, but you explained the stuff clearly. He is really good but sometimes forgets we don't understand his language.
You're right about cache locality not being involved. It's the same thing with strings and small string optimizations
the right-to-left approach is often a good idea when looking for the largest subarray
14:00 I think the biggest cost is allocation, since it has to call malloc every single iteration of the loop, which means one for almost every character. I'm guessing that if you were to move the Vec::with_capacity out of the loop and vec.clear() it every time you checked a window, you would get much closer to the performance of the array code (sketch below)
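A sketch of that change, with a made-up input string; the point is only that the allocation is hoisted and the buffer is reused:

```rust
fn main() {
    const W: usize = 14;
    let s = b"mjqjpqmgbljsphdztnvjfqwrcgsmlb";
    // One allocation up front instead of one per window.
    let mut buf: Vec<u8> = Vec::with_capacity(W);
    for (i, win) in s.windows(W).enumerate() {
        buf.clear(); // reuse the buffer; its capacity is kept
        for &b in win {
            if buf.contains(&b) {
                break; // duplicate: this window can't qualify
            }
            buf.push(b);
        }
        if buf.len() == W {
            println!("first marker ends at {}", i + W);
            break;
        }
    }
}
```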
25:56 brother you are the "super leetcode monkey"
This screenshot reminds of Paolo Costa’s Secret Juice ad.
Just started my cs degree but I love every part of this and can’t wait to get to this level
You can assign each character to a prime number and keep multiplying the result by the next character's prime whenever the product isn't divisible by it 😮 (sketch below).
Mem = int32
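A sketch of that idea in Rust, as a two-pointer window. One caveat on the memory claim: the product of 14 distinct primes doesn't fit in an int32, and the 14 largest of the 26 letter primes overflow even 64 bits, so this sketch uses u128 (input string is made up):

```rust
// One prime per letter. A duplicate-free window has a squarefree product,
// so `product % p == 0` means "this letter is already in the window".
const PRIMES: [u128; 26] = [
    2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53,
    59, 61, 67, 71, 73, 79, 83, 89, 97, 101,
];

fn first_marker(s: &[u8], w: usize) -> Option<usize> {
    let (mut product, mut left) = (1u128, 0usize);
    for (right, &b) in s.iter().enumerate() {
        let p = PRIMES[(b - b'a') as usize];
        // Shrink from the left until the incoming letter is gone again.
        while product % p == 0 {
            product /= PRIMES[(s[left] - b'a') as usize];
            left += 1;
        }
        product *= p;
        if right - left + 1 == w {
            return Some(right + 1); // index just past the marker
        }
    }
    None
}

fn main() {
    println!("{:?}", first_marker(b"mjqjpqmgbljsphdztnvjfqwrcgsmlb", 14));
}
```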
Okay, this is one of the best videos I have seen in a while.
Ignoring the constant *when you are learning Big O* is important so that you don't get distracted. However, when building something, it's only relevant once you are already at the "simplest form", the smallest big O you can achieve, and then the constant matters.
I stumbled across a super nice run-time optimization video in F#, it's called "F# for Performance-Critical Code, by Matthew Crews" neat stuff in there!
This was a great video 👏👏, enjoyed watching it, I wish youtube suggests me more videos of this type 😅
Thanks for your nice and insightful explanation!
Dhanyavad bhai
Once you get down to this level just write it in assembly. It's quite fun and simple
We did it in C#, and while it is difficult to understand just by watching the video, it becomes trivially simple once you start putting it on paper. We weren't able to completely reproduce the flow Prime has in Rust: I see he first adds the last character, then checks, and only then removes the left one, which means he enters the loop with potentially 13 bits set. We instead initialized a first window covering 14 characters and then, as long as 14 bits aren't set, exclude left, include right, then check. I don't think it changes a lot, but I'm curious to make the code even smaller than it currently is.
On the next step, which is understanding how I can parallelize this, right now I don't see how.
20:20 I often store stuff as bitset. It's more comfortable than working with arrays IMO.
Recently I also turned some struct of boolean flags into a bitset. (Or I rather told some AI to do it for me, since it's pretty repetitive)
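Something like this, with hypothetical flag names standing in for whatever the struct of bools held:

```rust
// A struct of bools replaced by one byte of flags.
#[derive(Clone, Copy, Default)]
struct Flags(u8);

impl Flags {
    const DIRTY: u8 = 1 << 0;
    const VISIBLE: u8 = 1 << 1;
    const FOCUSED: u8 = 1 << 2;

    fn set(&mut self, f: u8) { self.0 |= f; }    // turn bits on
    fn clear(&mut self, f: u8) { self.0 &= !f; } // turn bits off
    fn has(&self, f: u8) -> bool { self.0 & f != 0 }
}

fn main() {
    let mut flags = Flags::default();
    flags.set(Flags::DIRTY | Flags::FOCUSED);
    flags.clear(Flags::DIRTY);
    println!("focused={} dirty={}", flags.has(Flags::FOCUSED), flags.has(Flags::DIRTY));
}
```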
‘Prim-ye-jun’ that made me laugh harder
17:30 I also always subtract 'a' instead of modding sizeof(T).
Neetcode: *walks into a room*
Inefficient algorithm: Why do I hear boss music?
I love your explaining of hard things in simple English in this video. Keep going, you are doing a good thing. 👌
Yeah, I just started learning DSA, and I see this. Wow, I'm cooked.
No one mentions the work of previous coders; giving due credit is a sign of not being all talk. The skipping "optimization" is Boyer-Moore.
"This little amoeba youtuber" -> Next Video, "This electron youtuber"
Was not expecting to see python in the thumbnail of a video with a title containing “faster”
Fixed window width is crazy. Why not track start and end separately? There would be no nested iterations, no tracking of multiple symbols.
No offense at all neetcode, I love neetcode. But I had a ratatouille moment when he started going into a sliding window explanation. I think I'm traumatized from my last job search.
24:20 I started to hate if-let. In this case I'd use let-else, especially because the else arm of let-else has to return (or continue/break) anyway (tiny comparison below).
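For anyone who hasn't met let-else, a minimal comparison with a made-up loop body:

```rust
fn demo(items: &[Option<u32>]) {
    for item in items {
        // if-let: the happy path gets indented one level deeper.
        if let Some(v) = item {
            println!("if-let: {v}");
        }

        // let-else: the happy path stays flat, and the else arm must
        // diverge (return/break/continue/panic), which is the point above.
        let Some(v) = item else { continue };
        println!("let-else: {v}");
    }
}

fn main() {
    demo(&[Some(1), None, Some(3)]);
}
```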
Primy-agen
In a code where the loops...spin and grow
BigO whispers...how fast can we go
With n squared in sight...We’ll optimize all right
And watch as that CPU blazes with new mojo!
27:00 I agree the intuition made so much more sense to me
17:15 mod functions are far more expensive in terms of clock cycles than subtraction; that feels like it'd matter a lot if we're in the realm of 1,000,000% optimizations
%32 will get compiled down to an AND. If it weren't a power of 2 then yes, it'd be vastly slower than the character - "a" approach
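A quick way to convince yourself: for unsigned values the two functions below compile to the same AND instruction, and this just checks that they agree:

```rust
fn via_mod(x: u32) -> u32 { x % 32 } // the compiler strength-reduces this...
fn via_and(x: u32) -> u32 { x & 31 } // ...to exactly this for unsigned x

fn main() {
    for x in [0u32, 1, 96, 97, 122, 4096, u32::MAX] {
        assert_eq!(via_mod(x), via_and(x));
    }
    println!("equivalent");
}
```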
the name is primagoon
This is just a tweak to Boyer Moore algorithm from 1977, which is also a tweak to the KMP algorithm from 1970. This is all very, very old news. Good algorithms to be sure, but you should have heard about them back in college, not be learning about (and seemingly shocked by) them today. Next week are we going to "discover" the B* tree? BTW, for anyone who wants to read some great tutorials on how to improve code performance, see Bentley's "Writing Efficient Programs". Also Michael Abrash has several books where he discusses code optimization (some are graphics focused, but a lot of the techniques will apply to any code). One more good reference is Hacker's Delight, but it's very low level and tough sledding if you aren't fairly advanced.
The use of modulo is bad practice. While here we are doing modulo of a power of two on unsigned ints, which any sane compiler will optimize into an AND (or do some wizardry to make it work on signed numbers as well), if these two conditions weren't met and an actual division were performed to compute the modulo, there would be a significant runtime cost. As such, your proposed subtraction would be faster.
So long as it's a compile-time constant, I'm fairly certain you can mod whatever number you want and it'll come out as a series of shift, sub and imul instructions. Godbolt helpfully told me that `int mod (int a) { return a % 3; }` contains not a single idiv instruction, _at zero optimizations._ At -O3, the function length went from 20 to 11 instructions. Using the 32 bit FNV_prime as the compile-time constant merely changed the constants in Godbolt's output, no other effect (ok, an lea with a multiply in the second operand got changed to an imul, whatever).
Now, what it _absolutely will not do_ is take an array of anything, deduce that they are all _absolutely_ compile-time constants, and perform the same optimizations for each index of that array. No, for that to work, you have to declare an array of function pointers, which the compiler will refuse to inline under any circumstances (I shouldn't be surprised by that, but I sort of am).
And apparently that optimization is just barely worth it on some CPUs: `int mod (int a) { return a % 7; }` removes the idiv on the general-case CPU, but generates 14 lines of assembly -- unless you specify `-march=znver3`, in which case the idiv comes right back, as I'm assuming it would for most modern architectures.
Matter of fact, whatever algorithm they're using seems to get worse the further away you are from a power of 2, where the "nearest" power of 2 is always smaller than the constant. Maybe the guy who came up with the algorithm will generalize it to calculate down from the next higher power of 2 as well, and this piece of advice will become consigned to the dustbin of history. Who knows. Fascinating stuff, either way
Omg my brain was like wha whahuh huùuh ohhhh yeah i get it u sent me on a roller coaster ride bro
@@mage3690 can i get that in a tldr pls im dyslexic (please, ik us programmers need to read, but i long for recognition and programming is the only unique skill i could learn at school to get recognition)
reality is that bruteforcing on gpu is going to be faster for any reasonable size
Would love to see more reactions vids like this lol
The name is the prime agen!
Love these videos. Thanks.
Now do it in CUDA and actually get O(1) by testing all positions in one cycle.
And when the number of positions exceeds your cores, what then? It isn't O(1), it's still O(n)
Amazing video
Hey, had to stop watching the video halfway through so maybe I missed something, but I'm mainly referring to the beginning section of the video where you explain different ways to tackle the problem. When you mention a dynamic array, are you talking about some sort of higher-level data structure that I am not familiar with? Just asking because when I hear dynamic array I'm thinking of a heap-allocated array in C which you manipulate fully on your own with malloc and such. Is mallocing some size any slower than just going for a static (stack) array? Just to be clear: int arr[4] vs int *arr = malloc(...
A dynamic array is basically a vector in C++ or an ArrayList in Java. JS and Python only use dynamic arrays. I guess another word for it would be a 'resizable array' (quick demo below).
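A quick illustration of "resizable", using Rust's Vec here; the exact growth policy is an implementation detail:

```rust
fn main() {
    let mut v: Vec<u8> = Vec::new(); // dynamic array: starts empty
    for b in 0..10u8 {
        v.push(b);
        // Watch capacity jump as reallocation happens behind the scenes.
        println!("len={:2} cap={:2}", v.len(), v.capacity());
    }
}
```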
Amazing 🎉❤
Bro this thumbnail is devious
Can anybody explain why we use both binary/base 2 and also 8-bit bytes? I am a noob. I am asking about the fundamental thing. Any long-form answer will be appreciated.
Edit: As I went further into the video, I realized they are talking about DSA, of which I have no idea. But still, any easy explanations are welcome. Cheers
What are you using for the diagrams and screencasting tools? They look swish
21:10 what happens in the occurrence where there are 3 duplicates?
Any extra duplicates will result in fewer than 14 bits set.
Two duplicates will result in 12 bits set (12 unique + 2 duplicates cancelling each other).
Three duplicates will result in 12 bits set (11 unique + 3 duplicates giving us 1 unique and 2 cancelled).
You want exactly 14 bits in the final result (sketch below).
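That toggle-and-count check as a minimal Rust sketch (lowercase ASCII assumed; the reverse-iteration skip from the video is left out here):

```rust
// XOR toggles one bit per letter, so letters with an even count in the
// window cancel out; the window is all-distinct exactly when 14 bits survive.
fn first_marker(s: &[u8], w: usize) -> Option<usize> {
    let mut mask: u32 = 0;
    for &b in &s[..w] {
        mask ^= 1 << (b - b'a'); // build the initial window
    }
    if mask.count_ones() as usize == w {
        return Some(w);
    }
    for i in w..s.len() {
        mask ^= 1 << (s[i] - b'a');     // toggle the incoming letter
        mask ^= 1 << (s[i - w] - b'a'); // toggle the outgoing letter
        if mask.count_ones() as usize == w {
            return Some(i + 1);
        }
    }
    None
}

fn main() {
    println!("{:?}", first_marker(b"mjqjpqmgbljsphdztnvjfqwrcgsmlb", 14));
}
```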
ThePrimmyGen seems like a cool guy
i'm a noob who's only just installed rustc and read only a few pages of the Rust book; i have no clue why people approach this problem with a hash set instead of a table or vector
"...by this little youtuber called the pree-mee-a-jen" 🤣
Sometimes performance doesn't really matter at all, sometimes getting features shipped is even more important.
You lost me at bitmask, even though I am also named Benny 😢
I freaking looooooooooooooooooooooooooooooooooooooooooove these videooooooooooooooooooooooooooooooooooooooooos
it's more like "making a faster algo" than "making an algo faster"
Where can we find this problem text?
the name is premi agent
Primi-Agen
The future of computing will eventually move away from algorithms
but what would be a vector in Python? A List?
yeah, python only has dynamic arrays, not static ones i believe
i saw what you did there at the start, so should we call you dr. n now
With your small understanding of cache hierarchy and SIMD knowledge, I would not call myself an elite programmer...
Feels like the result he has wasn't even optimized. Why do windows when you can just do left and right markers?
Bro roasted the preemagene
guys, what about the case where there are 3 or more of the same character in the window? that bit will still be set to true, am i missing something?
In that case we won't have 14 bits set to true tho.
Premiumgen is an OK guy
"Just a little RUclipsr "
Don’t know rust so didn’t do much for me
21:50 what if there’s 3 repeated letters? The bit would be 1 again
Correct, but he stated that the only thing that matters is if there are 14 distinct characters (14 ‘1’s)
If there’s 3 repeated letters, that bit would be 1. But there wouldn’t be enough other ‘1’ bits to total to 14
@@ericcoyotl you’re right, thanks for the explanation
informative ..
(y)
Apologies for judging: I thought you just talk like a superior human being with that tone, but I can see it's just your accent/tone. You are humble and still human, and not AI (evidence at 26:00). Subbed and will recommend to... I have no friends to recommend this type of stuff to. Send help :( :( :(
a noob reviewing a noob reviewing some guy's stuff. we really hit rock bottom.
Oh, social media wars. Each one suk a dik
I love you all