Distributed Pure Functions by Richard Feldman
- Published: Sep 15, 2024
- There's a lot of hand-waving when it comes to scaling purely functional code. It would be easy to get the impression that as long as all your functions have no side effects, and all your data is immutable - say, because the programming language guarantees it - then distributing the work across cores and machines becomes trivial. Theoretically, a purely functional language could even parallelize workloads automatically for you behind the scenes!
Naturally, things get trickier in practice. This talk dives into the practical considerations of distributing functions that are known to be pure. How much can really be done automatically? How do you avoid accidentally slowing things down when the coordination costs get too high? What are the tradeoffs and edge cases involved?
Come find out what happens when pure functions get distributed!
x.com/rtfeldman
Talk from Systems Distributed '24: systemsdistrib...
Join the chat at slack.tigerbee...
I’ve come to have high expectations for Richard Feldman talks, and I’m yet to be disappointed. Incredible talk!
I've watched like 3 and I'm now looking at picking up a functional language
Didn’t expect to see Richard Feldman on this channel. Awesome as always! 😍
Richard Feldman is half the reason to watch any talk by Richard Feldman. Not sure if it already does, but Roc might need to pick a niche first and then expand from there.
I wanna know if the Roc team will ever make a JavaScript platform or will Richard always say "Use Elm"
Commutativity is not the important factor for the reduce function; associativity is.
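The comment above is right: to split a reduce across workers you only need the operator to be associative, because each chunk is folded independently and the partial results are combined in their original order. A minimal Python sketch (not from the talk; `chunked_reduce` is a hypothetical helper) illustrating why associativity, not commutativity, is what matters:

```python
from functools import reduce

def chunked_reduce(op, items, chunk_size):
    # Simulate a distributed reduce: fold each chunk independently
    # (these folds could run on separate workers), then combine the
    # partial results in their original chunk order.
    chunks = [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]
    partials = [reduce(op, c) for c in chunks]
    return reduce(op, partials)

# String concatenation is associative but NOT commutative.
# Chunk order is preserved, so the chunked result matches a sequential fold.
words = ["dis", "tri", "bu", "ted"]
concat = lambda a, b: a + b
assert chunked_reduce(concat, words, 2) == reduce(concat, words)  # "distributed"

# Subtraction is NOT associative: chunking changes the grouping,
# so the result diverges from a left-to-right sequential fold.
nums = [100, 1, 2, 3]
sub = lambda a, b: a - b
print(reduce(sub, nums))             # (((100 - 1) - 2) - 3) = 94
print(chunked_reduce(sub, nums, 2))  # (100 - 1) - (2 - 3)   = 100
```

Commutativity only becomes necessary if the combiner merges partial results out of order, e.g. as whichever worker finishes first.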
Well, 8:25, that's obvious: when you parallelize, you add a communication problem. However, the reason you add concurrency is to get fault tolerance and high availability.
Supercomputers simply replace a compute limitation with a communication limitation.
Well, data is always immutable; what's mutable is read/write storage.
2021 Sooo what's changed since?