1 Billion Rows Challenge
- Published: 20 Sep 2024
- Recorded live on twitch, GET IN
/ theprimeagen
Become a backend engineer. It's my favorite site
boot.dev/?prom...
This is also the best way to support me: support yourself by becoming a better backend engineer.
MY MAIN YT CHANNEL: Has well edited engineering videos
/ theprimeagen
Discord
/ discord
Have something for me to read or react to?: / theprimeagenreact
Kinesis Advantage 360: bit.ly/Prime-K...
Hey, I am sponsored by Turso, an edge database. I think they are pretty neat. Give them a try for free, and if you want you can get a decent amount off (the free tier is the best, better than PlanetScale or any other)
turso.tech/dee...
As a Python connoisseur, let me loop through this and get back to you guys in about a millennium.
Though perhaps Mojo might be better for my health
Should be easy with pandas/polars/pyarrow.
@@MrGeometres No external libraries.
If you use asyncio you might solve it in two millennia
As a JS cook, let me pull out my .reduce; I will see you in 0.99 millennia
Craziest thing is the fact that the fastest implementation is less than 6.2 seconds and it keeps improving.
Dotnet ones are going sub-3s last time I saw lmao
A valid C++ implementation is below 0.2s. Does not look good for JVM, does it?
@@vitalyl1327 reading a 12 GB file in 0.2s means your SSD read speed is 60 GB/s. ORLY?
@@stariyczedun May be realistic if the file is cached in RAM already. 60GBps is a bit more than 2 channels of DDR4 3200 peak throughput, so could be achieved on a 4 channel machine.
@@alexanderdaum8053 my idea of realism is that the answer is already cached as well 😀
Just checked out the repo author, works at a company called Decodable that specialises in real-time stream processing
Definitely not sus lmao
Lmaoooo
It could be a job application
Decodable is powered by Apache Flink, which is written in Java and Scala, not sus whatsoever
@@eddster2309 Haha I didn't even look that far
Real-time stream processing would allow the file reading to occur on one thread while another thread processes it. As long as the program's calculations are faster than the file IO, the whole run is only as slow as the file IO itself.
This is 100% for a company
Pffft, what do you mean? No way a company would get people to do their job for free, it's just a take-home test
Absolutely, the choice of Java makes it very obvious.
netflix
I don't see how this would be useful for a company in any way, there's already faster ways to load big data into a useful format
Equally likely is someone just issuing a challenge to settle an argument. Just some guy
The fact that the challenge requires a specific language is what makes it the most suspicious
I think the main point of doing this challenge in Java, is to showcase the performance benefits of using virtual threads over normal java concurrent threads, which were included in the latest major release.
If you wanna showcase Java's performance you gotta compare it with another language.
@@lostsauce0 java versions are another language.
@aiyazmostofa1501 uhh, I hate to be the bearer of disappointing news, but I have witnessed Java beating C in a very predictable manner.
I took a course where we had the task of building Conway's Game of Life in both C and Java. To everyone's shock, all of our C programs were either tied in terms of speed with Java, or were beaten by Java. At the end of the day, it all came down to AOT vs JIT optimizations. Turns out if you give Java enough time, it starts to do some really smart things that C can't.
Well it's more of a Java challenge yeah.
A showcase of how low and hacky you can go with it.
Anyone submitting answers to this should use a non commercial license.
SO TRUE
can you please explain more ?
It’s not about the code but the idea behind
@@daumienebi Because then if a company wants to use the algorithm, they have to make their code open source
@@daumienebi there's a chance projects like this are astroturfing psyop brain rape campaigns by businesses to crowd source their problems exploiting peoples passion and labor to improve that business proprietary closed-source software.
I used to be a Big Data Dev using Java. I'd probably load a page worth of results at a time (whatever page size that fits neatly into JVM memory) and process it in chunks simultaneously using thread pools and queues. Essentially doing a map-reduce style solution.
It's now down to 1.5 seconds. Holy crap!!!!! But amazingly the challenge has now ended so Gunnar Morling ran the top ten on all 32 cores/64 threads and got 0.3 seconds. Bloody hell. Java scales!!!
a nice way to get specific work done for free lol
Let's go Prime, show us some Enterprise Java Coding.
Brazil mentioned, today is a good day!
I believe Tails runs purely from RAM disk for security/privacy reasons (no forensic artifacts left on disk/persistent storage)
Yep
Hey @ThePrimeagen I know you're not a Java lover, but this sounds like fun, no? Are you planning on doing this on stream :)?
"You just need four numbers: min, max, sum, count"
And now I feel validated, I had the same thought as a senior developer at Netflix
The big challenge is how to read the file as efficiently as possible. If you have everything in RAM the problem is embarrassingly parallelisable. Each thread can keep its own values for min, max, sum and count per station and just combine everything at the end with a single reduce. You would need some kind of hash map to look up the stations, and ideally it would be a perfect hash map (no need for collision handling, but that might be premature optimization).
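A minimal sketch of that "four numbers per station" accumulator in Java. The class and method names here are invented for illustration; the real top entries layer far more aggressive tricks (custom hashing, branch-free parsing, mmap) on top of this core:

```java
import java.util.HashMap;
import java.util.Map;

// Per-station accumulator: the four numbers the comment mentions.
final class StationStats {
    double min = Double.POSITIVE_INFINITY;
    double max = Double.NEGATIVE_INFINITY;
    double sum = 0;
    long count = 0;

    void add(double v) {
        if (v < min) min = v;
        if (v > max) max = v;
        sum += v;
        count++;
    }

    // Combine another thread's partial result in the final reduce step.
    void merge(StationStats o) {
        min = Math.min(min, o.min);
        max = Math.max(max, o.max);
        sum += o.sum;
        count += o.count;
    }

    double mean() { return sum / count; }
}

public class StatsDemo {
    public static void main(String[] args) {
        // Each worker thread would keep its own map like this one,
        // then all maps get merged at the end.
        Map<String, StationStats> perThread = new HashMap<>();
        StationStats s = perThread.computeIfAbsent("Hamburg", k -> new StationStats());
        s.add(12.0);
        s.add(8.0);
        StationStats fromOtherThread = new StationStats();
        fromOtherThread.add(-2.0);
        s.merge(fromOtherThread);
        System.out.printf("%.1f/%.1f/%.1f%n", s.min, s.mean(), s.max);
    }
}
```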
The current second place is the vice president of software development at oracle lol
Same, some company needs this so they created this "FAKE CHALLENGE" in a single language to avoid using a database. Not participating ...
The interesting thing is that there's a lower-bound on the performance you can get with the code still looking friendly. Once you go beyond that, the techniques aren't quite as fun to read, but they're fun to write. :)
I watch this and I think of work where I can watch the letters appear in MS Word, because the file has some pictures in it and some Excel graphs each with like 1000 data points or so.
Technology is truly amazing.
If the VM has 32GB, then it may be cached in memory from the first run (which is discarded), so after that it will go fast
fastest entries do it in one SSD read basically so you won't beat them with this
awk, cut, sort, immediately come to mind when seeing these.
True. I tried the chess results stats problem in Go and then using find, grep and awk. Unix shell won on both wall clock time and CPU time. I tried 3 different approaches in Go, with and without channels and goroutines.
can't wait for someone to write this in C and it's literally just doing 12gb at the speed of dram
Drammn man!
Isn't that the point of this whole challenge? To optimise the memory access pattern (it needs to write, not just read, and the output may not fit in L3)
Nothing prevents you from forking that contest for all languages. Just design a cool t-shirt and launch it. Maybe re-use the hosting, instance-type and OS-distribution constraints so that the Java results are comparable with this one. I recall competing in shortest DOS program to print "Imphobia!" to console, it was fun at least.
Using OCaml (with Base and parallel domains), I managed to reach 55 seconds on an old 4-core i7. I'm new to OCaml so it could probably be optimized more.
My plan is to try Owl on it to see what it can do.
The problem is not CPU but reading the file in a way that maximises SSD read throughput. I did several attempts in Java and the biggest jump was from reading the file using multiple mmaped regions in parallel (like 16 or even 32). With that you cut time from around 2 minutes to 15 seconds. The fastest version then was running in 10 seconds on my laptop so I lost interest, everything else you can do (vector math, improve JVM startup time) won't yield much improvement.
@@stariyczedun Sounds like you know what you're talking about. I have no idea about SSD read optimization so I read the file as a whole into a memory buffer (didn't take more than 20 seconds) and then CPU was actually the only place left to optimize. I suppose that if you parallelize file reads then you can combine parse on the fly and reach much better results. Thanks for the feedback.
@@jonathanjacobson7012 probably my wording was a bit confusing. Yes, you basically need to parallelize reads and parsing/calculations. I did it with mmap; probably there are other ways to do it. I think it is doable in OCaml as well. Basically, split the file into chunks and run a bunch of threads to process those chunks. mmap just provides a nice way to read regions of the file completely independently, like memory, with the OS handling the real SSD reading underneath.
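A sketch of the chunk-splitting part in Java, with invented names and an assumed `measurements.txt` path. One caveat: a single MappedByteBuffer is capped at 2 GB of int-addressable positions, so the real 12 GB file needs several mapped regions (or the newer Foreign Function & Memory API); this only shows the idea of aligning chunk boundaries to newlines so no thread sees a split line:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapChunks {
    // Split [0, size) into one slice per worker, extending each interior
    // boundary to just past the next '\n' so every slice holds whole lines.
    // Works on any ByteBuffer, including one returned by FileChannel.map().
    public static long[] chunkOffsets(ByteBuffer buf, long size, int chunks) {
        long[] offsets = new long[chunks + 1]; // offsets[0] stays 0
        offsets[chunks] = size;
        for (int i = 1; i < chunks; i++) {
            int pos = (int) (size * (long) i / chunks);
            while (pos < size && buf.get(pos) != '\n') pos++; // align to newline
            offsets[i] = Math.min(pos + 1, size);
        }
        return offsets;
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical input path; mapping is read-only.
        try (FileChannel ch = FileChannel.open(Path.of("measurements.txt"),
                                               StandardOpenOption.READ)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            int n = Runtime.getRuntime().availableProcessors();
            long[] offsets = chunkOffsets(buf, ch.size(), n);
            // Each of n threads would now scan buf between
            // offsets[i] and offsets[i + 1] with its own stats map.
        }
    }
}
```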
I'd like to see it go toe to toe with C# fully Jitted
8:34 Military operating systems (like the OS that runs the Abrams MBT) work exactly like that. Have since the late 70s
TLDR; find out how much you can avoid using java while still technically using java 🤣🤣
Would love to see you do this Prime :D
someone should create an x86_64 assembly version to troll Java devs
Why would that troll java devs?
@PythonPlusPlus Java runs in a VM and that has some overhead. Native code can be faster even if it does the exact same thing. The competition is about speed, so assembly can beat them
@@zsomborgyenge4359 The competition requires you to use Java. So you won’t be able to enter the competition.
@@zsomborgyenge4359 good luck beating AOT and JIT optimizations when doing assembly by hand
"Burnt-in RAM." Isn't that called ROM? I mean, you could alter the non-burnt bits I guess, but supposing you don't do that, that would just be ROM.
14s is the best when Prime is watching this repo, now the best is 6.159s, crazy
still too slow compared to non-JVM versions
@@thedeemon 2.575s is the best one right now.
I wanna see someone doing this in Excel
Wow. 6.2 seconds!!! Off-heap memory! No GC at all. Java is bloody fast. London stock exchange runs Java and processes 6,000,000 transactions per second on a single thread (LMAX).
On a typical CPU this is a little under 1k cycles per transaction. Considering all the interesting stuff like DB and network would be amortized across chunks that sounds like way more than enough for basic book keeping. Don't get me wrong; you need to be pretty clever and careful in how you do your chunking and scheduling of said interesting stuff so you're not stalling out, but otherwise CPU-wise this isn't a huge workload and could probably run in any language. Maybe not python 😋
Mr primeagen i love that you can automate the upload to your youtube, but can you toss a link to the content you react to. Thx -fellow SD'an
Who will eat all this exposure generated by the very smart guys racing to the top ?
Everything in RAM is just the past of computing before storage was invented
I can't believe .NET 1BRC is faster than Java.
Tiny Core Linux runs in RAM, I believe. Initially stored on a flash drive or HDD, then loaded entirely into RAM; it's meant to be as minimal as possible.
there's also Alpine Linux
Is this relevant to the video? Did I miss something?
@@marceloferreira8068 although Alpine doesn't live in RAM by default
@@darkdudironaji having the entire disk just be RAM, you already have the file in RAM, as well as the executable, etc
@@LtdJorge Right, but I'm sure this is standardized. So the code is submitted and then run on the same machine. Otherwise a researcher could just use a supercomputer.
Only an application programmer would think of something like this. The thought would never cross a data guy's mind; his first and only thought would be how to get this into a database, and THEN how to deal with 900-billion-row tables. And that first thought would start with a bulk insert, and whatever I do, avoid logging.
I'm working at a trading company. A similar amount of data is processed in under 0.01 ms. The fastest use specialised hardware bypassing the CPU, and there it's below 10 ns per data point. In 10 ns light travels 3 meters…
Our idol is coming, brothers! 🟢🟡🔵
Can you include all the stuff to enable JNI and a compiler in one file and embed some native code?
Guys, hold on, is he coming to Brazil in May???
yes
Time to travel to Brazil
0:42, if I had to take care of this, I would start by rewriting this file into something much smaller. After all, 12 GB is too much. Say names average 8 letters and numbers go up to 99.9. Then a line is (8 + 1 (;) + 2 (two digits) + 1 (.) + 1 (one digit) + 1 (newline)) = 14 characters, i.e. 14*8 = 112 bits. So, writing a binary file instead: 4 bits for the fraction, 7 for the whole number and ~11 bits for an index into a separate name table to be looked up later. So 4 + 7 + 11 = 22 bits, under 3 bytes. That's roughly 20% of 112 bits, so those 12 GB would be reduced to about 2.4 GB.
This would make any kind of search a lot faster.
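For illustration, the 22-bit packing this comment proposes might look like the following (class and method names invented here). Note, as the replies point out, that 11 bits can only index 2,048 names while the rules allow up to 10,000, and the layout as written has no sign bit for negative temperatures, so this is a sketch of the idea rather than a rule-compliant encoding:

```java
public class PackDemo {
    // Pack per the comment's scheme: 11 bits station index,
    // 7 bits whole degrees (0-99), 4 bits tenths (0-9) -> 22 bits total.
    static int pack(int station, int whole, int tenths) {
        return (station << 11) | (whole << 4) | tenths;
    }

    static int station(int p) { return p >>> 11; }
    static int whole(int p)   { return (p >>> 4) & 0x7F; }
    static int tenths(int p)  { return p & 0xF; }

    public static void main(String[] args) {
        int p = pack(42, 23, 7); // station #42, reading 23.7
        System.out.println(station(p) + " " + whole(p) + "." + tenths(p));
    }
}
```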
A single scan is enough so I don't really see the advantage in what you describe.
that's so much reduction
OK, what if you encounter a station name that's 100 chars long? Read the rules
@@Celastrous It took me like 3 re-reads to realize that he does not propose assuming a max character limit on names and writing them like that in binary.
His solution is to store all the names in a separate file, and instead of the actual name, use the line index of the name in that file.
The real problem is that he assumes 11 bits (0 to 2047) should be enough to cover all the names, while the rules clearly state that there can be up to 10,000 unique names.
@ar1i_k Ah gotcha
in fact there are even "in RAM" Linux distros like Tiny Core
"same dataset for all submissions" just screams "reverse engineer the dataset to optimize this specific answer"
dude if you think 14 sec is fast what do you think of 1.5 sec which is the winner?
after watching a course video on recursion in Java, the language just looks like C + Py structurally. EASILY readable, showing synergy of syntax with other languages. Princeton has an entire CS database for programming in Java... I'd say it's gonna be on the list of popular languages soonish
I really wish this would be open to other languages - but I guess in a way it is... I just create the data, then write teh cod3 and that's it.
It took me some time to realize that "mean" is just a fancy name for "average". It's a somewhat bad task for illustrating Java performance, because almost no memory management is involved, so implementations can be very fast unless written idiotically. Managed languages can easily number-crunch nearly as fast as native code, with very thin margins; where they fall apart is when you do heavy memory work......
Nothing's stopping you from doing it in any language, you just need to generate the data with Java; after that, you can use whatever. If they want results back in Java, well, tough break
The fastest solution is pretty cool actually.
Compress the file in blocks then you can read it faster if you decompress in memory and read parallel blocks. After like 12-20 threads it should be faster than the overhead
Would be fun to try this in a bunch of languages to compare
Read the file byte by byte, start processing as soon as possible while still reading, split the work across all available cores, then merge.
In C, it took around 89 ms.
Dude it is going to be io bound for sure
I thought you'd solve the challenge, but you only read the challenge 😂 he got me
They now have it for Rust, Go, C++, and others
Limiting it to an odd language like Java just screams of a company challenge lol
The challenge is more about playing with new Java 21 APIs (vector ops, foreign memory access api for mmap).
Is Java now an "odd" language?!
Brazil Mentioned mentioned
What are you doing in Brazil and where are you going? Gonna be in BH for one month starting on February 8th
The fastest you can get this is about 2-3 seconds so there is still a lot of headroom 😄
0:30 How about running Rust compiled to WASM in a JVM WASM runtime? 🤣
Unfortunately, there doesn't seem to be a path directly from Rust to JVM bytecode 😢
I haven't read the rules in detail, so this is probably disallowed, but since the slowest run is discarded: just store the results, and if they already exist, print them out instantly
I just realized how similar your voice is to Rick Sanchez from Rick & Morty.
3:46 what do you mean by "sorting is a constant time operation in this case"? How can it be constant time?
reading and calculating the values in this case takes much much longer than it would take to sort the results, and the number of values being sorted isn't based on the input to the program since you could have any number of unique weather stations (to a max of 10 000) in the input file, so it's treated as a constant time operation that happens at the end.
Are you really coming to Brazil in May?
yes
Intel Optane is RAM as disk, or disk as RAM. But it was never really popular, so Intel discontinued it.
there are results now with execution times of less than 2 seconds. ouch
1. Use Linux and copy the file to a mounted RAM disk.
2. Have one thread read the file as fast as possible into byte arrays.
3. Have multiple threads start processing at different starting points (use the text bytes for a (partial?) 256-tree lookup).
4. Aggregate and print out the results.
That would be my first attempt.
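A toy version of the reader/worker split described above, with invented names and hard-coded stand-ins for real file chunks: one reader thread feeds a bounded queue, a worker drains it, and an empty array marks end-of-input. A real solution would run one worker per core, each with its own stats map, merged at the end:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class Pipeline {
    // Tiny stand-in for real per-chunk parsing: count line endings.
    static int newlines(byte[] chunk) {
        int n = 0;
        for (byte b : chunk) if (b == '\n') n++;
        return n;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<byte[]> queue = new ArrayBlockingQueue<>(64);
        byte[] poison = new byte[0]; // empty chunk = end of input

        // Reader: in the real program this pulls large chunks off the file
        // sequentially; here two hard-coded lines stand in for the file.
        Thread reader = new Thread(() -> {
            try {
                queue.put("Hamburg;12.0\n".getBytes());
                queue.put("Bulawayo;8.9\n".getBytes());
                queue.put(poison);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        // Worker: drains chunks and aggregates; here it just counts lines.
        Thread worker = new Thread(() -> {
            try {
                int lines = 0;
                for (byte[] c = queue.take(); c.length > 0; c = queue.take())
                    lines += newlines(c);
                System.out.println(lines + " lines");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        reader.start();
        worker.start();
        reader.join();
        worker.join();
    }
}
```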
copying the file to a mounted ram-disk is pointless since you only need to read the contents once. You may as well load the entire file into a buffer instead, and have the data immediately accessible to the program.
@@PythonPlusPlus but you are copying it manually before executing the program, cutting the time of moving it to memory out of the execution.
@@PythonPlusPlus Accessing a local byte array is a LOT faster than accessing an abstracted-away mapped file that is only readable through syscalls.
It's better to just copy huge chunks to local arrays than to read each byte on its own.
The syscall overhead of reading single bytes would be huge.
mmap
@@blinking_dodo That was my point…..
companies are getting smarter
Make sure to come to Olegário Maciel. Raval Rio bar. You'll be king there. Let 'em know Tiago is your homie
CSV and Java 11 will get this done.
Java NIO buffer reader and in memory db.
why would you need a db instead of a single for loop basically
@thedeemon 16 million lines need to be chunk-read by the Java NIO buffered reader, like 32k at a time, and then written into an in-memory DB like Hazelcast or Redis, or HSQLDB if a single node?
If we want to run clusters of JVMs then we need Redis, and the file IO needs to know which nodes ate which 32k lines from the file.
@@chebrubin no external libs
@@chebrubin We don't need no clusters, the data fits in RAM of one machine, and people using C solve it in less than a second (if a file was read once before and cached by the OS). No DB will accept a billion rows this fast.
Prime wouldn’t hire Kyle Kingsbury 😂 I’m dead
Write it as a Rust library and just call it from Java
One BILLION dollars 😮
This problem looks like an example from a book about using map-reduce with Hadoop 😊
I won't be surprised if hadoop was super slow here compared to a straightforward program that does one thing well. (inspired by "Scalability! But at what COST?" article)
Primgean is coming to Brazil in May?? LESGOOOO
HEY HEY HEY, if you are coming to Brasil you gotta notify us beforehand so we can prepare to meet you 🙏
Tails OS runs in memory… on a USB stick!
Someone did it in 1sec now...
sounds very fun! It'd be cool if similar challenges like this existed in other languages like C# or Python. Anybody know of any, or where to find them?
Vercel accepts contributions to its Programming Language and compiler Benchmarks.
I'll be waiting for you in May in Brazil :3
Nathanial Forks
are you gonna try it in Rusty?
COME TO BRAZIL
huehuehuehuehuheuhehuehu
At my company we pxe boot Ubuntu and the entire filesystem runs on ram
My boy will come to Brazil lets goooooo
Why only Java!? Why not any language? With modern 15 GB/s SSDs and AVX-512 intrinsics in C/C++ this can be done in < 1 sec.
please, don't tell me you're going to Rio de Janeiro.
This is really interesting, anyone know of any website or collection of similar more realistic programming performance challenges?
Project Euler, maybe.
@@Blubb3rbub they just focus on correctness not performance
@@EikeSchwass the idea is that you won't get a correct solution in time without being smart about it.
Just load it into SQL database and see how fast it go brrrrr
writing a billion rows to a DB will be too slow compared to a single for-loop accumulating the result
How is 1 billion data points a large amount? Oh Java, you cray
Submit your code as Non-Commercial License guys
Nice video, thanks :)
Write a Java program with one line of code calling an executable compiled from Rust.
A better version of this challenge (not a company sneakily getting people to solve their problem) comes from this video about a sorting challenge where people made it 400 million times faster than the initial Python code.
ruclips.net/video/c33AZBnRHks/видео.htmlsi=ASPe_4enPDK6dUwg
shshshssshh :DDDD you will LOVE the pf firewall. the name of it, at least :D
It's not Brazil mention, it is 1 Bircoin
Gonna be where in Brasil?
People are already on 6 seconds, that's crazy
Any language without support for opening files with mmap() would be at a huge disadvantage.
I think Java NIO can be configured to use mmap.