Distributed Pure Functions by Richard Feldman

  • Published: Sep 15, 2024
  • There's a lot of hand-waving when it comes to scaling purely functional code. It would be easy to get the impression that as long as all your functions have no side effects, and all your data is immutable - say, because the programming language guarantees it - then distributing the work across cores and machines becomes trivial. Theoretically, a purely functional language could even parallelize workloads automatically for you behind the scenes!
    Naturally, things get trickier in practice. This talk dives into the practical considerations of distributing functions that are known to be pure. How much can really be done automatically? How do you avoid accidentally slowing things down when coordination costs get too high? What are the tradeoffs and edge cases involved?
    Come find out what happens when pure functions get distributed!
    x.com/rtfeldman
    Talk from Systems Distributed '24: systemsdistrib...
    Join the chat at slack.tigerbee...

Comments • 10

  • @Flourish38 25 days ago +20

    I’ve come to have high expectations for Richard Feldman talks, and I’m yet to be disappointed. Incredible talk!

    • @jeremymcadams7743 25 days ago +2

      I've watched like 3 and I'm now looking at picking up a functional language

  • @dwylhq874 25 days ago +5

    Didn’t expect to see Richard Feldman on this channel. Awesome as always! 😍

  • @ally_jr 22 days ago +1

    Richard Feldman is half the reason to watch any talk by Richard Feldman. Not sure if it already does, but Roc might need to pick a niche first and then expand from there.

  • @laughingvampire7555 14 days ago +1

    I wanna know if the Roc team will ever make a JavaScript platform or will Richard always say "Use Elm"

  • @Gilgamesh557 21 days ago +3

    Commutativity is not the important factor for the reduce function; associativity is.
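The point in this comment can be sketched in Python (an illustrative example, not from the talk): a chunked reduce that simulates parallel evaluation only needs the combining operation to be associative, because partial results are still combined in their original order; commutativity is never exercised.

```python
from functools import reduce

def chunked_reduce(op, xs, chunks=2):
    """Simulate a parallel reduce: split the list into chunks,
    reduce each chunk independently (as separate workers would),
    then combine the partial results in order."""
    size = (len(xs) + chunks - 1) // chunks
    partials = [reduce(op, xs[i:i + size]) for i in range(0, len(xs), size)]
    return reduce(op, partials)

# String concatenation is associative but NOT commutative:
# the chunked result still matches the sequential result.
words = ["a", "b", "c", "d", "e"]
concat = lambda x, y: x + y
assert chunked_reduce(concat, words) == reduce(concat, words) == "abcde"

# Subtraction is NOT associative, so chunking changes the answer
# even though each chunk is reduced left-to-right.
sub = lambda x, y: x - y
nums = [100, 1, 2, 3]
assert reduce(sub, nums) == 94           # ((100-1)-2)-3
assert chunked_reduce(sub, nums) == 100  # (100-1) - (2-3)
```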

  • @laughingvampire7555 14 days ago

    Well (8:25), that's obvious: when you parallelize, you add a communication problem. However, the reason you add concurrency is to get fault tolerance and high availability.

    • @kellymoses8566 11 days ago

      Supercomputers simply replace a compute limitation with a communication limitation.
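The tradeoff these two comments describe can be made concrete with a toy cost model (my own illustrative numbers, not from the talk): if work divides perfectly but each extra worker adds a fixed coordination cost, adding workers helps only up to a point, after which communication dominates.

```python
def total_time(work, workers, comm_per_worker):
    """Toy model: perfectly divisible compute plus a fixed
    per-worker coordination cost. Beyond some worker count,
    the coordination term outgrows the compute savings."""
    return work / workers + comm_per_worker * (workers - 1)

# 100 units of work, 2 units of coordination per extra worker:
# total time falls, bottoms out, then rises again.
times = {p: total_time(100, p, 2) for p in (1, 2, 4, 8, 16)}
# {1: 100.0, 2: 52.0, 4: 31.0, 8: 26.5, 16: 36.25}
```

The specific constants are made up; the shape of the curve is the point — past the minimum, "more machines" buys you a communication limitation instead of a compute one.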

  • @laughingvampire7555 14 days ago

    Well, data is always immutable; what is mutable is read/write storage.

  • @awnion 25 days ago

    2021 Sooo what's changed since?