No More Concurrency Chaos

  • Published: 22 Dec 2024

Comments •

  • @CristianBilu-q4n 2 months ago +1

    Wasn't it easier to have a normal mutex and call .Unlock() after the expensive operation? I think in the end you got to the exact same place, but with a new library in your project.

    • @CristianBilu-q4n 2 months ago +2

      Nah, I am wrong here.
      1. This is a library from std, not an external one, so no new library in the project.
      2. My solution would cause one key to lock the entire cache, which is not optimal.
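
A minimal sketch (mine, not from the video) of the single-mutex approach described in this thread: holding one sync.Mutex across the expensive operation means a miss on one key blocks lookups for every other key, which is the commenter's second point. The type and function names here are made up.

```go
// Illustrative only; Cache and expensiveFetch are assumptions, not the video's code.
package cache

import (
	"sync"
	"time"
)

type Cache struct {
	mu   sync.Mutex
	data map[string]string
}

func NewCache() *Cache {
	return &Cache{data: make(map[string]string)}
}

// Get holds the single lock for the whole expensive operation, so a miss on
// key "a" also blocks an unrelated request for key "b".
func (c *Cache) Get(key string) string {
	c.mu.Lock()
	defer c.mu.Unlock()

	if v, ok := c.data[key]; ok {
		return v
	}

	v := expensiveFetch(key) // every other key waits here too
	c.data[key] = v
	return v
}

func expensiveFetch(key string) string {
	time.Sleep(100 * time.Millisecond) // stand-in for an API call or DB query
	return "value-for-" + key
}
```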

  • @bernardcrnkovic3769 2 months ago +1

    Wouldn't the result be the same if you just had two locks, one for read/write on the cache and another for the expensive-call block? The idea is that you check whether the cache was perhaps set just before you entered the 'expensive call block' and skip it in that case.

    • @bionic_batman 2 months ago

      I think so. In any case, goroutines are still forced to wait until the s.sg.Do statement returns something.
      If the API call / data fetching were done while the write lock is in place, no operations would have been executed at the same time.
      To me it looks like this library is needed if you don't want to bother with manual locks, but if you already have them, you can use them just fine.
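
For reference, a rough sketch (my own, not from the video) of the two-lock idea from this thread: an RWMutex guards the map, a second mutex guards the expensive-call block, and the cache is re-checked after that second lock is acquired. The names are assumptions.

```go
// Illustrative double-check pattern with two locks.
package cache

import "sync"

type Cache struct {
	rw    sync.RWMutex // guards the map
	fetch sync.Mutex   // serializes the expensive-call block
	data  map[string]string
}

func NewCache() *Cache {
	return &Cache{data: make(map[string]string)}
}

func (c *Cache) Get(key string) string {
	// Fast path: read lock only.
	c.rw.RLock()
	v, ok := c.data[key]
	c.rw.RUnlock()
	if ok {
		return v
	}

	// Slow path: only one goroutine at a time may run the expensive call.
	c.fetch.Lock()
	defer c.fetch.Unlock()

	// Re-check: another goroutine may have filled the cache while we were
	// waiting for the fetch lock.
	c.rw.RLock()
	v, ok = c.data[key]
	c.rw.RUnlock()
	if ok {
		return v
	}

	v = expensiveFetch(key)

	c.rw.Lock()
	c.data[key] = v
	c.rw.Unlock()
	return v
}

func expensiveFetch(key string) string {
	return "value-for-" + key // stand-in for an API call or DB query
}
```

This does avoid duplicate work for the same key, but the single fetch mutex also serializes expensive calls for different keys, whereas singleflight only deduplicates concurrent calls that share a key.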

  • @dank3k 2 months ago +1

    I'm not a Go developer at all, so I have zero knowledge on this, but this is - more or less - called the 'critical section' in parallelization. I'll go ahead and guess that Go supports semaphores & locks - why not just use those to synchronize over that area?

    • @adibhanna 2 months ago +1

      Totally! Go does have those, and I think they're used inside this library. It's just a simpler interface for some use cases.
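
For comparison, a minimal sketch of golang.org/x/sync/singleflight, which the s.sg.Do call mentioned above appears to come from; the key, the return value, and the goroutine count are made up for illustration.

```go
package main

import (
	"fmt"
	"sync"
	"time"

	"golang.org/x/sync/singleflight"
)

func main() {
	var g singleflight.Group
	var wg sync.WaitGroup

	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// All five goroutines ask for "user:42". The function runs for
			// the first caller; callers that arrive while it is in flight
			// share its result (shared reports whether that happened).
			v, err, shared := g.Do("user:42", func() (interface{}, error) {
				time.Sleep(50 * time.Millisecond) // stand-in for the expensive call
				return "expensive result", nil
			})
			fmt.Println(v, err, shared)
		}()
	}
	wg.Wait()
}
```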