Of all the videos on this playlist, this is the only one that I didn't understand fully. An example of usage would've helped a lot...
I have been listening to the virtual memory talks of you. Such clear communication. No over explanation, clear and straight to the point
Oh boy the first 10 seconds cleared a doubt which I had for so long and wasn't able to find the answer for it over the net. One of the best video of the series. Thanks man!
Think this was the best explanation one can get in whole lot of internet pages available, thanks a ton!!!
Thanks David Black-Schaffer...One of the best online tutorials... Keep up the good work.
So from what I understand after watching this and others on YT is this:
In VIPT the cache is indexed using the page offset, and it stores the actual data plus the PA (called the physical tag here).
We use the VA and the TLB to find the PA. At the same time, we use the offset to find the data and its physical tag. Then we compare the two PAs: if they match it's a hit and we return the data; if not, we get the data from RAM and maybe update the cache and TLB.
Thanks for the explanation! this helps ..
It's pretty close, but not exact:
1. The cache uses the index bits (the set number) to select a set, while in the meantime the PA is resolved at the TLB.
2. Once the TLB resolves the PA, it is passed to the selected set, which contains cache lines, each identified by a tag; the PA from the TLB is compared against the tags of those cache lines (blocks).
3. If there's a match, it's a hit. If nothing matches, the eviction policy runs, one of the cache lines is replaced by the requested block, and the tag on that line is updated.
As the name implies (Virtually Indexed, Physically Tagged), the tags come directly out of the TLB: the tag is part of the PA.
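The flow in the steps above can be sketched in a few lines of Python. This is a toy model with made-up sizes (4 KB pages, 64 B lines, a 64-set direct-mapped cache, and the whole page frame number used as the tag); real hardware does the set lookup and the TLB access in parallel, which the sequential code can only hint at in comments.

```python
# Toy VIPT lookup sketch (assumed geometry, not any real CPU's design):
# 4 KB pages, 64 B cache lines, 64 sets, so the 6 index bits [11:6]
# lie entirely within the untranslated page offset.

PAGE_BITS = 12
BLOCK_BITS = 6
INDEX_BITS = 6

def split_vaddr(va):
    """Split a virtual address into (set index, block offset).
    Both come from the untranslated page-offset bits, so the cache
    can be indexed before the TLB finishes translating."""
    block_offset = va & ((1 << BLOCK_BITS) - 1)
    set_index = (va >> BLOCK_BITS) & ((1 << INDEX_BITS) - 1)
    return set_index, block_offset

def lookup(cache, tlb, va):
    """cache: {set_index: (physical_tag, data)}, tlb: {vpn: pfn}."""
    set_index, _ = split_vaddr(va)          # step 1: index with VA bits
    vpn = va >> PAGE_BITS
    pfn = tlb[vpn]                          # in parallel: TLB resolves the PA page
    physical_tag = pfn                      # here the tag is the whole frame number
    entry = cache.get(set_index)
    if entry and entry[0] == physical_tag:  # step 2: compare physical tags
        return ("hit", entry[1])
    return ("miss", None)                   # step 3: would run eviction + refill

tlb = {0x5: 0x9A}                           # VPN 0x5 maps to PFN 0x9A
cache = {0x2: (0x9A, "hello")}              # set 2 holds a line tagged with PFN 0x9A
va = (0x5 << 12) | (0x2 << 6) | 0x10        # VPN 5, set index 2, offset 0x10
print(lookup(cache, tlb, va))               # -> ('hit', 'hello')
```

Changing the TLB entry to a different PFN makes the same lookup miss, which is exactly the "same index, different physical tag" case the comparison exists to catch.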
Damn, that cough/sneeze scared the sh**t out of me!
That cough woke up my dog, haha. He woke up suddenly very startled, and he stared at me with widened eyes as if he was telling me, "Da fuck was that?!"
bro why wouldn't you put the timestamp, I'm scared now
5:15, to prevent more innocent learners from having a heart attack.
by far the clearest explanation of this concept
4:23 One point to note for VIPT: the TLB is smaller than the cache, so the TLB will be faster. By the time the cache retrieves the physical tag, the TLB has already finished getting the physical address. Worst case, they finish at the "same time".
Besides increasing associativity, the other way to index the VIPT cache is to assume the untranslated bit will hit. If the index hits in the cache, it implies the VA->PA translation for that bit was not modified.
If the index misses, an L2 request is made, which may show that the cache line did exist but the VA->PA bits didn't match. When this happens, L1 can create a victim and ask L2 to fill the data at the specified index.
For example, consider a cache with 64B lines, so [5:0] is the block offset. If the cache is 8KB and the page size is 4KB, there are 128 sets. Bits [11:6] don't need translation, but VA[12] could be either PA[12] or ~PA[12]. If VA[12] = PA[12], the line will be found in the cache if it exists. If VA[12] = ~PA[12], the L2 response will indicate a hit at the index with bit [12] flipped. The core can then ask L2 to move the line from ~PA[12] to PA[12].
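The bit [12] alias in that example is easy to check numerically. A tiny sketch, using the same assumed geometry as the comment (8 KB direct-mapped cache, 64 B lines, 128 sets, 4 KB pages); the addresses are made up for illustration:

```python
# Index bits are [12:6]; bit [12] is the only index bit that is
# translated, so VA[12] may or may not equal PA[12].

def set_index(addr):
    """Bits [12:6] of an address select one of 128 sets."""
    return (addr >> 6) & 0x7F

va = 0x1040            # VA[12] = 1, page offset 0x040
pa = 0x0040            # same page offset, but PA[12] = 0
i_va = set_index(va)   # where the L1 looks, indexed by the VA
i_pa = set_index(pa)   # where the line actually belongs physically

assert i_va != i_pa            # alias: VA[12] != PA[12]
assert i_va ^ i_pa == 0x40     # the two indices differ only in bit 6
# The L1 miss at i_va would come back from L2 as "hit at the flipped
# index"; the core then asks L2 to move the line from i_pa to i_va.
```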
I have a feeling that the introduction video about the cache was missed
😭😭😭😭😭😭😭😭you save me!! you save my final exam!!
Excellent video... very useful... great work... not able to understand why people are not clicking on like video after watching this... anyway its really helpful to me...
"...not able to understand why people are not clicking on like video after watching this..."
Possible reason: Because they need to sign in first.
Excellent content, thanks for providing this!
Wonderful explanation... may God bless you... Great explanation
This is soo cool dude...your explanation is awesome.
Amazing explanation, and in such short videos. Thanks!
Thank you sir. You really saved my final!!!!
Best explanation I've ever seen
Left address: physical address of the item I'm looking for.
Right address: physical address of the cached item.
If they are equal: the cached item is the one I'm looking for (and I get it directly from the cache, not RAM).
If not equal: the cached item is not the one I'm looking for, even though they have the same virtual address.
(For a 12-bit page offset) the cache has 4096 entries, and each entry stores 1. the data itself and 2. the physical address (tag) of that data.
Bless you! From HS Mannheim, Germany.
best ever tutor
Superbly Explained !!! Thanks for sharing :)
Since the physical and virtual offsets are the same, and since we access the cache by that same offset, it seems as if we could just as well call this approach a physically indexed, physically tagged cache. The cache has nothing to do with the virtual pages; the TLB does. I guess. I had to listen to this part several times. The previous chapters are much better explained; in this one, the tags are blurry and not well introduced from the beginning.
Thanks for the great videos, they really explained everything that I wanted to know!!
I have one question: for VIPT, what if the TLB misses but the cache hits? Can they still compare the physical addresses in that case?
Then the cache lookup doesn't matter, since without the translation there is nothing to compare the tag against, and we don't pay any additional time for having tried.
If there's a TLB miss, how would you know it's a cache hit, though? You don't know whether the cached data belongs to the current PA. I guess in that case it will get the PA from the page table in RAM, refill the TLB, and get the data from RAM, refilling the cache.
What an awesome explanation
Can you give a very clear example with numbers? E.g. a specific virtual address, the exact TLB translation, and the PA tags.
Especially, I want to see an exact example of the PA page and the PA tag, and their comparison!
Loved it ! Thanks a lot.
It really helped me a lot!
In VIPT, wouldn't the cache return memory before it is known whether it is truly a cache hit? If that's the case, how does the CPU react to finding out it is actually a miss? Does it just drop the instruction and restart after loading the new memory into the cache? In the other case, where the check for a cache hit happens before the CPU gets the memory, wouldn't the speedup over physical addressing be very small or nil?
Much appreciated!
3:20 what am I supposed to remember? did I miss where "tags" are explained?
Thank you so much, this video is amazing :) !!!
Great video, thanks!
If it's a 2-way set-associative cache, will the cache return 2 physical tags and check both against the single physical address from the TLB?
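Yes, that is the usual picture: both ways of the indexed set present their tags, and each is compared against the one physical tag from the TLB. A minimal sketch (made-up tags and data, sequential where hardware compares in parallel):

```python
# Toy 2-way set-associative tag check: a set holds two (tag, data)
# ways; hardware compares both tags against the single physical tag
# from the TLB simultaneously, and at most one way can match.

def lookup_2way(set_ways, physical_tag):
    """set_ways: [(tag, data), (tag, data)] for the indexed set."""
    for tag, data in set_ways:      # both compares happen at once in hardware
        if tag == physical_tag:
            return data
    return None                     # miss in both ways

ways = [(0x9A, "line A"), (0x3C, "line B")]
assert lookup_2way(ways, 0x3C) == "line B"
assert lookup_2way(ways, 0x77) is None
```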
Hi, one question: if 2 programs are using the same VA, that means there has to be a map for which page table belongs to which program.
This part wasn't covered. Could you explain with an example of two programs having the same VA?
Thank you for your wonderful videos. They are quite helpful .
Thanks a lot.
Thank you so much, it helps me a lot.
Sorry for the stupid question: if a VIPT cache needs to compare two PAs (one from the TLB, the other from the cache), then why not just use the TLB?
By just using the TLB you only get another physical address; the cache, however, directly gives you the data you need.
@@paulyu6334 Thanks a lot, legend!
Does each program need a separate TLB? If not, then how is a single TLB able to translate the virtual addresses that could be from different programs into physical addresses?
Yes, the TLB contents are flushed on a context switch.
What about Memory Management Unit?!?!?! :/
Would this work the same for ARM ?
Please can you make an example video of that ? I don't understand the Tag
PianoLife Hi, if you register for the full course (at test.scalable-learning.com, enrollment key YRLRX-25436) you can watch the cache lectures which include detailed descriptions of how the tags work.
David Black-Schaffer With your last question, are you implying that increasing associativity will increase the cache size?
Thanks,
-Rahul
rahul yalkur Rahul, I'm sorry that I don't have time to answer all the questions here. This material is used as the preparation material for my course so we go over those in-class. If you log into the course (at test.scalable-learning.com, enrollment key YRLRX-25436) itself you can take a look at the in-class problems (and their solutions) and see if that helps.
Hi David, I cannot enroll in your class using the key YRLRX-25436.
Thank you so muchhh
Question at the end was not clear.
What if we increase the page size, not 4kB but 2MB: will the L1 cache be able to grow to this amount (in the case of VIPT)?
Yes, with a 2MB page size we get 21 bits to index into the cache.
Sir, I did not get the difference between a physical cache and a virtual cache, because with a virtual cache the CPU gets the virtual address in return.
Shouldn't the CPU get the physical address in return?
The CPU always gets the memory stored in the cache in return (assuming a cache hit). The difference is the way the cache is addressed. In physical addressing, the cache address matches the physical address in main memory, meaning the virtual address has to be translated to a physical address by the TLB before the cache can be accessed. In virtual addressing, the cache address matches the address used for virtual memory, so the CPU can get the result without using a TLB or physical address at all.
What is a virtual cache? Please explain this.
3:51 *tik tok Memory Montage*
Hit or miss, I guess they sometimes miss, huh?
I was wearing headphones!!!!!! 😖🤪
5:16 ...and it's corona time