Virtual Memory: 13 TLBs and Caches

  • Published: 26 Nov 2024

Comments • 69

  • @ryudastuff
    @ryudastuff 6 years ago +50

    Of all the videos on this playlist, this is the only one that I didn't understand fully. An example of usage would've helped a lot...

  • @SureshIyerr
    @SureshIyerr 5 years ago +17

    I have been listening to your virtual memory talks. Such clear communication: no over-explanation, clear and straight to the point

  • @real-investment-banker
    @real-investment-banker 4 years ago +9

    Oh boy, the first 10 seconds cleared a doubt which I had for so long and wasn't able to find the answer for over the net. One of the best videos of the series. Thanks man!

  • @selvalooks
    @selvalooks 5 years ago +2

    I think this was the best explanation one can get in the whole lot of internet pages available, thanks a ton!!!

  • @rajeshhariharan7575
    @rajeshhariharan7575 5 years ago +4

    Thanks David Black-Schaffer...One of the best online tutorials... Keep up the good work.

  • @blueassassinX
    @blueassassinX 4 years ago +5

    So from what I understand after watching this and others on YT, it is this:
    In VIPT the cache is indexed using the offset, and it stores the actual data and the PA (it's called the physical tag here).
    We use the VA and the TLB to find the PA. At the same time, we use the offset to find the data and also the PA. Then we compare the 2 PAs: if hit, return the data; if miss, we get the data from RAM and maybe update the cache and TLB

    • @sunshineinwilderness
      @sunshineinwilderness 4 years ago

      Thanks for the explanation! This helps...

    • @sabitkondakc9147
      @sabitkondakc9147 2 years ago

      It's pretty close but not exact:
      1. The MMU checks the index, which is the set number; in the meanwhile, the PA is resolved at the TLB.
      2. The TLB is resolved and the PA is passed to the set, which contains cache lines; the cache lines have tags defining them, and the PA taken from the TLB is compared to the tags of the cache lines (blocks).
      3. If there's a match, it's a hit; if nothing matches, the eviction policy is run and one of the cache lines is replaced by the current query's cache block, and the tag on the block (cache line) is updated as well.
      As the name implies, in Virtually Indexed, Physically Tagged, tags come directly out of the TLB; the tag is a part of the PA.
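
The flow described in the two comments above can be sketched in a few lines of Python. This is a hedged toy model, not code from the video; the 4 KiB page, 64 B line, and 64-set direct-mapped geometry (and all names) are illustrative assumptions.

```python
# Toy VIPT (virtually indexed, physically tagged) lookup.
# Assumed geometry: 4 KiB pages, 64 B lines, 64-set direct-mapped cache,
# so the whole set index comes from untranslated page-offset bits.
PAGE_OFFSET_BITS = 12
LINE_OFFSET_BITS = 6
INDEX_BITS = 6                      # 64 sets * 64 B = 4 KiB = one page

def vipt_lookup(va, tlb, cache):
    """tlb: {vpn: ppn}; cache: {set_index: (physical_tag, data)}."""
    # Step 1: index the cache with untranslated bits of the VA...
    set_index = (va >> LINE_OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    line = cache.get(set_index)
    # Step 2: ...while, in parallel, the TLB translates VPN -> PPN.
    ppn = tlb[va >> PAGE_OFFSET_BITS]
    pa = (ppn << PAGE_OFFSET_BITS) | (va & ((1 << PAGE_OFFSET_BITS) - 1))
    physical_tag = pa >> (LINE_OFFSET_BITS + INDEX_BITS)
    # Step 3: compare the TLB's physical tag with the tag stored in the line.
    if line is not None and line[0] == physical_tag:
        return line[1]   # hit: the cached line really is this physical line
    return None          # miss: fetch from RAM, refill the cache (and TLB)
```

On a hit, the cache read and the translation proceed in parallel; only the final tag compare happens after both finish.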

  • @kaissbouali6542
    @kaissbouali6542 8 years ago +84

    Damn, that cough/sneeze scared the sh**t out of me!

    • @pashosemwengie5942
      @pashosemwengie5942 8 years ago +5

      That cough woke up my dog, haha. He woke up suddenly very startled, and he stared at me with widened eyes as if he was telling me, "Da fuck was that?!"

    • @wachowski9525
      @wachowski9525 4 years ago +2

      Bro, why wouldn't you put the timestamp? I'm scared now

    • @tung-hsinliu861
      @tung-hsinliu861 3 years ago +1

      5:15, to prevent more innocent learners from having a heart attack.

  • @nico-wj1mh
    @nico-wj1mh 1 year ago

    by far the clearest explanation of this concept

  • @Hentirion
    @Hentirion 6 months ago

    4:23 One point to note for VIPT: the TLB is smaller than the cache, so the TLB will be faster. So by the time the cache gets the physical tag, the TLB has already finished getting the physical address. Worst case, they would finish at the "same time".

  • @msalvi6302
    @msalvi6302 3 years ago

    Besides increasing associativity, the other way to index the VIPT cache is to assume the untranslated bit will hit. If the index hits in the cache, it implies the VA->PA translation for that bit was not modified.
    If the index misses, an L2 request is made, which may show that the cache line did exist but the VA->PA bits didn't match. When this happens, L1 can create a victim and ask L2 to fill the data at the specified index.
    For example, consider a cache with a 64B line: [5:0] is the block offset. If the macro is 8KB and the page size is 4KB, there are 128 sets. Bits [11:6] don't need translation. Bit [12] of the VA could either be PA[12] or ~PA[12]. If VA[12] = PA[12], the line will be found in the cache if it exists. If VA[12] = ~PA[12], the L2 response will indicate a hit on the flipped index[12] bit. The core can then ask L2 to move the line from ~PA[12] to PA[12].
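
The bit[12] case in the comment above can be made concrete with a tiny sketch (a toy calculation under the comment's assumed geometry of 64 B lines, an 8 KB direct-mapped cache, and 4 KB pages; the helper name is mine):

```python
# Set-index math for the geometry above: 64 B lines (offset bits [5:0]),
# 8 KB / 64 B = 128 sets (index bits [12:6]), 4 KB pages (bits [11:0]
# untranslated). Index bit 12 comes from the VA, so a given physical line
# can live in either of two sets depending on whether PA[12] == VA[12].
LINE_BITS = 6
INDEX_BITS = 7
PAGE_BITS = 12

def candidate_sets(va):
    """Return the alias pair: the two sets where this address's line may be."""
    idx = (va >> LINE_BITS) & ((1 << INDEX_BITS) - 1)
    alias = idx ^ (1 << (PAGE_BITS - LINE_BITS))  # flip the bit-12 index bit
    return idx, alias

print(candidate_sets(0x1FC0))  # (127, 63): one line, two possible sets
```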

  • @Efferto93
    @Efferto93 5 years ago +17

    I have a feeling that the introduction video about the cache was missed

  • @不合格鸟骑
    @不合格鸟骑 1 year ago +1

    😭😭😭😭😭😭😭😭 You saved me!! You saved my final exam!!

  • @rahulkumar72485
    @rahulkumar72485 9 years ago +3

    Excellent video... very useful... great work... not able to understand why people are not clicking on like video after watching this... anyway it's really helpful to me...

    • @alphhan
      @alphhan 7 years ago +1

      "...not able to understand why people are not clicking on like video after watching this..."
      Possible reason: Because they need to sign in first.

  • @haiphamle3582
    @haiphamle3582 5 months ago

    Excellent content, thanks for providing this!

  • @helpingprograms
    @helpingprograms 8 years ago

    Great explanation .... may God bless you ... Great explanation

  • @prem9365
    @prem9365 8 years ago +1

    This is so cool, dude... your explanation is awesome.

  • @vikasnarayanan8430
    @vikasnarayanan8430 4 years ago

    Amazing explanation, and in such short videos. Thanks!

  • @vincezzz9757
    @vincezzz9757 6 years ago

    Thank you sir. You really saved my final!!!!

  • @momq1434
    @momq1434 3 years ago

    Best explanation I've ever seen

  • @chuyinw1897
    @chuyinw1897 4 years ago +2

    Left address: real address of the stuff I'm looking for
    Right address: real address of the cached item
    If they are equal: the cached item is the one I'm looking for (and we get it directly from the cache, not RAM)
    If not equal: the cached item is not the one I'm looking for, though they have the same virtual address
    (For a 12-bit page offset) the cache has 4096 entries (each entry in the cache stores 1. the data itself, 2. the physical address of that data)

  • @jallabubble7523
    @jallabubble7523 2 years ago

    Bless you! From HS Mannheim, Germany.

  • @chickomadrid6233
    @chickomadrid6233 2 years ago

    best ever tutor

  • @sahilgandhi9156
    @sahilgandhi9156 6 years ago

    Superbly Explained !!! Thanks for sharing :)

  • @ziwkovic6141
    @ziwkovic6141 6 years ago +2

    Since the physical and virtual offsets are the same, and since we access the cache by that same offset, it seems as if we could easily call this approach a physically indexed, physically tagged cache. The cache has nothing to do with the virtual pages; the TLB does, I guess. I had to listen to this part several times. The previous chapters are much better explained; in this one, the tags are blurry and not well introduced from the beginning.

  • @huat1998
    @huat1998 7 years ago +5

    Thanks for the great videos, they really explained everything that I wanted to know!!
    I have one question: for VIPT, what if the TLB misses but the cache hits? Can they still compare the physical address in that case?

    • @baljotsingh4013
      @baljotsingh4013 3 years ago

      Then it doesn't matter even if it looks up because we don't do that at the cost of additional time

    • @tsunghan_yu
      @tsunghan_yu 2 years ago

      If there's a TLB miss, how do you know it's a cache hit though? You don't know if the data is from the current PA. I guess in that case it will get the PA from the page table in RAM, refill the TLB, get the data from RAM, and refill the cache.

  • @umairalvi7382
    @umairalvi7382 2 years ago

    What an awesome explanation

  • @DongyunShinelsdy
    @DongyunShinelsdy 8 years ago +2

    Can you give an example which is very clear with numbers? e.g. the specific virtual address, the exact TLB translation, and PA tags.
    Especially I want to see an exact example of PA Page and PA Page and their comparison!!

  • @aakashpreetam7383
    @aakashpreetam7383 6 years ago

    Loved it! Thanks a lot.

  • @cyw4662
    @cyw4662 5 years ago

    It really helped me a lot!

  • @nutritionalyeast7978
    @nutritionalyeast7978 5 years ago

    In VIPT, wouldn't the cache return memory before it is known whether it is truly a cache hit? If that's the case, how does the CPU react to finding out it is actually a miss? Does it just drop the instruction and restart loading the new memory into the cache? In the other case, where the check for a cache hit happens before the CPU gets the memory, wouldn't the speedup over physical addressing be very small or nil?

  • @mircdom4603
    @mircdom4603 1 year ago

    Much appreciated!

  • @borisverkhovskiy5169
    @borisverkhovskiy5169 1 year ago

    3:20 What am I supposed to remember? Did I miss where "tags" are explained?

  • @bjdollcoloredpencil3273
    @bjdollcoloredpencil3273 6 years ago

    Thank you so much, this video is amazing :) !!!

  • @netaneld122
    @netaneld122 6 years ago

    Great video, thanks!

  • @gishgos
    @gishgos 9 years ago

    If it's a 2-way associative cache, will the cache return 2 physical tags and check them against a single physical address from the TLB?

  • @prashanthm9829
    @prashanthm9829 4 years ago

    Hi,
    One question: if 2 programs are using the same VA, that means there has to be a map of which page table belongs to which program.
    This part wasn't covered. Could you explain with an example of a program having the same VA?

    • @prashanthm9829
      @prashanthm9829 4 years ago

      Thank you for your wonderful videos. They are quite helpful.
      Thanks a lot.

  • @关加加
    @关加加 5 years ago

    Thank you so much. It helps me a lot

  • @aesophor
    @aesophor 5 years ago

    Sorry for the stupid question: If VIPT cache needs to compare two PAs (one from TLB, the other from Cache), then why not just use TLB?

    • @paulyu6334
      @paulyu6334 3 years ago +1

      You only get the physical address by just using the TLB; the cache, however, directly gives you the data you need

    • @aesophor
      @aesophor 3 years ago

      @@paulyu6334 Thanks, legend!

  • @21crus1
    @21crus1 3 years ago

    Does each program need a separate TLB? If not, then how is a single TLB able to translate the virtual addresses that could be from different programs into physical addresses?

    • @harshsharma57
      @harshsharma57 2 years ago +1

      Yes, the TLB content is flushed on a context switch

  • @neilwood6773
    @neilwood6773 5 years ago +1

    What about the Memory Management Unit?!?!?! :/

  • @SmilerBFC
    @SmilerBFC 8 years ago

    Would this work the same for ARM?

  • @RudolfCickoMusic
    @RudolfCickoMusic 9 years ago

    Can you please make an example video of that? I don't understand the tag

    • @davidblack-schaffer219
      @davidblack-schaffer219  9 years ago

      PianoLife Hi, if you register for the full course (at test.scalable-learning.com, enrollment key YRLRX-25436) you can watch the cache lectures which include detailed descriptions of how the tags work.

    • @rahulyalkur2226
      @rahulyalkur2226 9 years ago

      David Black-Schaffer With your last question, are you implying that increasing associativity will increase the cache size?
      Thanks,
      -Rahul

    • @davidblack-schaffer219
      @davidblack-schaffer219  9 years ago +4

      rahul yalkur Rahul, I'm sorry that I don't have time to answer all the questions here. This material is used as the preparation material for my course so we go over those in-class. If you log into the course (at test.scalable-learning.com, enrollment key YRLRX-25436) itself you can take a look at the in-class problems (and their solutions) and see if that helps.

  • @damejelyas
    @damejelyas 6 years ago

    Hi David, I cannot enroll in your class using the key YRLRX-25436

  • @oideplao7509
    @oideplao7509 4 years ago

    Thank you so muchhh

  • @MPK1881
    @MPK1881 10 months ago

    The question at the end was not clear.

  • @МаксимБеринчик
    @МаксимБеринчик 6 years ago

    What if we increase the page size, not 4kB but 2M? Will the L1 cache be able to grow to this amount? (in the case of VIPT)

    • @rishabhshirke1175
      @rishabhshirke1175 6 years ago +1

      Yes, with a 2MB page size we get 21 bits to index into the cache
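
The arithmetic behind this question and answer is simple enough to check directly (assumption: a VIPT L1 stays alias-free as long as its index and line-offset bits all fit within the untranslated page offset):

```python
# How many untranslated (page-offset) bits does each page size provide?
from math import log2

def untranslated_bits(page_size):
    return int(log2(page_size))   # page-offset bits shared by VA and PA

print(untranslated_bits(4 * 1024))         # 4 KiB page -> 12 bits
print(untranslated_bits(2 * 1024 * 1024))  # 2 MiB page -> 21 bits
# With 21 untranslated bits, index + line offset can cover up to 2 MiB
# per way, so a VIPT L1 could indeed grow (per way) up to the page size.
```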

  • @dhruvpatel7337
    @dhruvpatel7337 6 years ago

    Sir, I did not get the difference between a physical cache & a virtual cache, because in a virtual cache the CPU gets the virtual address in return.
    Shouldn't the CPU get the physical address in return?

    • @nutritionalyeast7978
      @nutritionalyeast7978 5 years ago +1

      The CPU always gets the memory stored in the cache in return (assuming a cache hit). The difference is the way the cache is addressed. In physical addressing, the cache address matches the physical address in main memory, meaning the virtual address has to be translated to a physical address by the TLB before the cache memory can be accessed. In virtual addressing, the cache address matches the address used for virtual memory, so the CPU can get the result without using a TLB or physical address at all.
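
The contrast in the reply above can be summarized in a toy sketch. This is hedged: the dictionary "caches" are keyed by whole addresses and the helper names are made up; real caches split addresses into index and tag bits.

```python
PAGE_BITS = 12  # illustrative 4 KiB pages

def translate(va, tlb):
    """TLB/page-table step: {vpn: ppn} plus the shared page offset."""
    return (tlb[va >> PAGE_BITS] << PAGE_BITS) | (va & ((1 << PAGE_BITS) - 1))

def physical_cache_read(va, tlb, cache):
    # Physically addressed: translate through the TLB first, then index by PA.
    return cache.get(translate(va, tlb))

def virtual_cache_read(va, cache):
    # Virtually addressed: index directly by the VA; no TLB needed on a hit.
    return cache.get(va)
```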

  • @irfanmanzoor2410
    @irfanmanzoor2410 5 years ago

    What is a virtual cache? Please explain this

  • @Simple-M___zzz
    @Simple-M___zzz 5 years ago +4

    3:51 *tik tok Memory Montage*

  • @Omele.t.t.e
    @Omele.t.t.e 5 years ago +2

    Hit or miss, I guess they sometimes miss, huh

  • @celsiusfahrenheit1176
    @celsiusfahrenheit1176 4 years ago

    I was wearing headphones!!!!!! 😖🤪

  • @decayingSineWave
    @decayingSineWave 4 years ago

    5:16 ..and it's corona time