Maintaining Cache Coherence with MESI

  • Published: 26 Nov 2024

Comments • 14

  • @gongian1261
    @gongian1261 5 days ago

    Thanks for the diagram! It really helps me understand the various complicated cases. I still have a question about the SHW when transitioning from E to I, though. I noticed you used "read with the intention to write". How is the intention detected immediately? Otherwise, wouldn't it be a two-step transition: a read miss changes a cache line from E to S, and then the following write causes it to change from S to I?
    Or, when you say SHW, does it actually mean a write miss issued by another cache? Because a write miss issued by another cache will cause a "read" from this cache (holding the line in the E state) to the "missing" cache, and that counts as a hit from the point of view of the "missing" cache?
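
    To make the two readings of the question above concrete, here is a minimal sketch (not from the video) of how a snooping cache might react to bus traffic, assuming the common BusRd / BusRdX naming, where BusRdX is the single "read with the intention to write" transaction broadcast by the writer. Because the intent travels in that one transaction, a line in E is invalidated in one step rather than passing through S first.

        /* Hypothetical MESI snoop handler; state and event names are
         * illustrative assumptions, not taken from the video.           */
        #include <stdio.h>

        typedef enum { MODIFIED, EXCLUSIVE, SHARED, INVALID } mesi_state;
        typedef enum { BUS_RD, BUS_RDX } bus_event;  /* BUS_RDX = read with intent to write */

        /* How this cache's copy changes when it snoops another cache's request. */
        mesi_state snoop(mesi_state s, bus_event e) {
            if (s == INVALID)
                return INVALID;      /* nothing to invalidate or share          */
            if (e == BUS_RD)
                return SHARED;       /* plain read miss elsewhere: M/E/S -> S   */
            return INVALID;          /* BUS_RDX elsewhere: M/E/S -> I, one step */
        }

        int main(void) {
            printf("E + BusRdX -> %s\n", snoop(EXCLUSIVE, BUS_RDX) == INVALID ? "I" : "?");
            printf("E + BusRd  -> %s\n", snoop(EXCLUSIVE, BUS_RD)  == SHARED  ? "S" : "?");
            return 0;
        }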

  • @BeanJuice1K
    @BeanJuice1K 1 year ago

    Hi professor, this is a wonderful video, thanks for the ramp-up!

  • @koustav2826
    @koustav2826 7 months ago

    When CPU 1 performs the read operation, will it fetch the new data that was modified by CPU 2, or will it get the old data that was there before CPU 2 modified it?

    • @nitinagrawal6637
      @nitinagrawal6637 6 months ago

      If CPU-2 modifies the data, it will invalidate CPU-1's copy, so CPU-1 will fetch the data from CPU-2's cache.
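
      In code form, a tiny, purely illustrative trace of that sequence could look like the sketch below; the structs and bus_* helpers are assumptions for the sake of the example, not real hardware interfaces.

          /* Two private caches holding the same line; CPU-2 writes, then CPU-1 reads. */
          #include <stdio.h>

          typedef enum { M, E, S, I } state;
          typedef struct { state st; int value; } cache_line;

          /* The writer broadcasts an invalidate: the other copy goes to I,
           * and the writer's copy becomes Modified (dirty).                 */
          void bus_write(cache_line *writer, cache_line *other, int value) {
              other->st  = I;
              writer->st = M;
              writer->value = value;
          }

          /* A read after invalidation misses; the Modified owner supplies the
           * fresh data and both copies end up Shared.                        */
          int bus_read(cache_line *reader, cache_line *owner) {
              if (reader->st == I && owner->st == M) {
                  reader->value = owner->value;
                  reader->st = owner->st = S;  /* memory is also updated on the flush */
              }
              return reader->value;
          }

          int main(void) {
              cache_line cpu1 = { S, 10 }, cpu2 = { S, 10 };
              bus_write(&cpu2, &cpu1, 42);                        /* CPU-2 modifies the data  */
              printf("CPU-1 reads %d\n", bus_read(&cpu1, &cpu2)); /* prints 42, the new value */
              return 0;
          }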

  • @nitinagrawal6637
    @nitinagrawal6637 6 months ago

    Thanks for the nice explanation. I have a doubt: if both CPUs issue a write request to the same data location, how is it resolved? Since it seems to be a race condition, do I need to synchronize this scenario myself, or what?

    • @JacobSchrum
      @JacobSchrum  6 months ago

      It absolutely would be a race condition, and you would need to ensure correct behavior with synchronization at the level of code. However, this coherence protocol ensures that any subsequent reads make sense, meaning that you can't do two simultaneous reads and get different values.
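
      As a sketch of what "synchronization at the level of code" might look like for this scenario (the variable names and the choice of a pthread mutex are assumptions for the example; build with -pthread):

          /* Two threads write the same location; the mutex decides which write
           * "wins", while coherence only guarantees every core then sees it.   */
          #include <pthread.h>
          #include <stdio.h>

          static int shared = 0;
          static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

          static void *writer(void *arg) {
              pthread_mutex_lock(&lock);   /* without the lock, the two writes race */
              shared = *(int *)arg;
              pthread_mutex_unlock(&lock);
              return NULL;
          }

          int main(void) {
              pthread_t t1, t2;
              int a = 1, b = 2;
              pthread_create(&t1, NULL, writer, &a);
              pthread_create(&t2, NULL, writer, &b);
              pthread_join(t1, NULL);
              pthread_join(t2, NULL);
              printf("shared = %d\n", shared);  /* 1 or 2, but every core agrees */
              return 0;
          }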

  • @DhananjaySureshGade
    @DhananjaySureshGade 8 months ago

    In the Invalid state, does the modified cache update the memory, or does it just update the other shared cache?

    • @JacobSchrum
      @JacobSchrum  7 months ago

      If a cache entry enters the Invalid state, then its contents will never be written back to memory. A cache line enters the Invalid state when the shared data is written/modified in another cache. So, the cache contents that eventually get written back to memory will come from a Modified or Shared cache line corresponding to the one that was invalidated in some other cache.
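
      A minimal sketch of that write-back rule, with hypothetical evict and memory_write helpers standing in for the cache controller and memory interface:

          /* Only a Modified (dirty) line is written back to memory on eviction;
           * Exclusive/Shared copies are clean and Invalid data is stale, so
           * none of those ever reach memory.                                    */
          #include <stdio.h>

          typedef enum { MODIFIED, EXCLUSIVE, SHARED, INVALID } state;
          typedef struct { state st; unsigned addr; int value; } cache_line;

          static void memory_write(unsigned addr, int value) {
              printf("writeback: mem[0x%x] = %d\n", addr, value);
          }

          void evict(cache_line *line) {
              if (line->st == MODIFIED)
                  memory_write(line->addr, line->value);
              line->st = INVALID;              /* the slot is now free to reuse */
          }

          int main(void) {
              cache_line dirty = { MODIFIED, 0x40, 42 };
              cache_line stale = { INVALID,  0x40,  7 };
              evict(&dirty);                   /* written back */
              evict(&stale);                   /* just dropped */
              return 0;
          }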

  • @عدنانعمرعدنان-غ1ط
    @عدنانعمرعدنان-غ1ط 8 months ago

    Thank you sir

  • @hamdaniibrahim8693
    @hamdaniibrahim8693 6 months ago

    ESI

  • @zxuiji
    @zxuiji 1 year ago

    Uh, no cache needs to snoop on another; they can all just pay attention to what's on the bus, and if the address matches what a cache is working with, it can do the usual. I would argue, however, that neither the CPU nor the cache should be invalidating anything: if they're told to write, then they write, f**k the data consistency, that's for the dev EXCLUSIVELY to sort out. The only thing that needs care is the main lock used for synchronizing software locks; a simple chip looping through each core's dedicated inputs to it is enough to decide who gets the main lock. Anywhere else the hardware should not care at all about data consistency; that's fully a problem for software, if at all.

    • @JacobSchrum
      @JacobSchrum  1 year ago +1

      At 1:34 I say that the caches are snooping on signals sent by the caches to each other, and these signals are indeed sent on the bus, but I suppose I could have been more precise with my language. As for whether or not MESI or snoopy protocols in general are any good, that is an issue you need to take up with computer engineers. The point of this video is to explain a protocol that is used in actual multicore systems. However, if by dev you mean the person writing user programs, then they most definitely should not be responsible for the contents of caches. Cache contents are meant to be invisible/transparent to the programmer in most cases, since user programs work at the level of memory, not the cache (though there are a few exceptional commands that allow for some cache manipulation, and these have led to some scary side-channel exploits).

    • @zxuiji
      @zxuiji 1 year ago

      @@JacobSchrum No, that one scenario of deciding which write to keep SHOULD be the dev's problem, as it's THEIR data it affects. It should be caught via a signal handler with something like SIGCLASH; then the dev can check what clashed and decide which to keep or whether to abandon them completely. As far as they're concerned, it's still normal memory.

  • @88564894654984561653
    @88564894654984561653 1 year ago +4

    Goat