Want to master Clean Architecture? Go here: bit.ly/3PupkOJ
Want to unlock Modular Monoliths? Go here: bit.ly/3SXlzSt
You're really good at this, great video.
Thank you very much!
Great content Milan, thanks! 🙏
My pleasure!
Really interesting 🤔 Have to investigate this more. The hybrid version indeed seems to be really useful.
Yeah it's awesome
really love this one ❤
It would be helpful to have a video on how to use HybridCache as a caching behavior in a mediator pipeline behavior in Clean Architecture.
That's a great idea for a future video!
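In the meantime, here's a minimal sketch of the idea, assuming MediatR and the HybridCache preview package; `ICachedQuery` (with its `CacheKey` and `Expiration` members) is an illustrative marker interface, not part of either library:

```csharp
using MediatR;
using Microsoft.Extensions.Caching.Hybrid;

// Queries that opt into caching implement this marker interface (an assumption for this sketch).
public interface ICachedQuery
{
    string CacheKey { get; }
    TimeSpan Expiration { get; }
}

// Pipeline behavior that wraps the handler call in HybridCache.GetOrCreateAsync,
// so cached queries skip the handler entirely on a cache hit.
public class CachingBehavior<TRequest, TResponse>(HybridCache cache)
    : IPipelineBehavior<TRequest, TResponse>
    where TRequest : ICachedQuery
{
    public async Task<TResponse> Handle(
        TRequest request,
        RequestHandlerDelegate<TResponse> next,
        CancellationToken cancellationToken)
    {
        return await cache.GetOrCreateAsync(
            request.CacheKey,
            async _ => await next(),
            new HybridCacheEntryOptions { Expiration = request.Expiration },
            cancellationToken: cancellationToken);
    }
}
```

You'd register it with something like `services.AddTransient(typeof(IPipelineBehavior<,>), typeof(CachingBehavior<,>));` so it only applies to requests implementing the marker interface.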
That is a big improvement on IMemoryCache and IDistributedCache
For sure
No, it is not a better option - at least not yet. I tried to use it in my current project, but I had to abandon that path. It's missing essential features for an L1-L2 cache (like the backplane), and the API lacks some methods that would cost nothing to add, like a simple "get". No wonder it is still in preview even though .NET 9 was released a while ago. However, it has already had an impact: pushed by the good ideas in it (the flags and the tagging, for example), the author of FusionCache improved his library by adding those (check out the preview version). Maybe others will follow suit.
The backplane is working?
In HybridCache? No support there. In FusionCache it is, as far as my use-cases go.
Do you use FusionCache? A much better option
Thanks for a great video. Suggest a video showing how to use HybridCache with AWS ElastiCache
Great suggestion!
@ Also, how would you set up a local dev environment for testing, etc.? Would it be best to set up a local Docker Redis instance, or to connect directly to ElastiCache from the dev machine, which seems somewhat convoluted?
Best Tutorial!
Wow, thanks!
I would like to know: in a clean architecture, can we use HybridCache to store information for the AuthorizationHandler, where we could get some values in order to grant or revoke access? Do you know if the logic can live behind an interface and be served as a scoped service across the application, while still preserving the data in the cache?
If you're using HybridCache in the implementation - then yes
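A rough sketch of what that could look like, assuming ASP.NET Core authorization and the HybridCache preview package; `PermissionRequirement`, `IPermissionRepository`, and the cache-key format are illustrative assumptions:

```csharp
using Microsoft.AspNetCore.Authorization;
using Microsoft.Extensions.Caching.Hybrid;

// Illustrative requirement and repository abstraction (not part of any library).
public record PermissionRequirement(string Permission) : IAuthorizationRequirement;

public interface IPermissionRepository
{
    ValueTask<HashSet<string>> GetForUserAsync(string userId, CancellationToken ct);
}

public class PermissionHandler(HybridCache cache, IPermissionRepository permissions)
    : AuthorizationHandler<PermissionRequirement>
{
    protected override async Task HandleRequirementAsync(
        AuthorizationHandlerContext context,
        PermissionRequirement requirement)
    {
        string? userId = context.User.FindFirst("sub")?.Value;
        if (userId is null) return;

        // Permissions are fetched from the database once, then served
        // from the cache until the entry expires or is invalidated.
        HashSet<string> granted = await cache.GetOrCreateAsync(
            $"permissions:{userId}",
            async token => await permissions.GetForUserAsync(userId, token));

        if (granted.Contains(requirement.Permission))
        {
            context.Succeed(requirement);
        }
    }
}
```

The handler itself can be registered as scoped via `AddScoped<IAuthorizationHandler, PermissionHandler>()`, while HybridCache stays a singleton, so the cached permissions survive across requests.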
@@MilanJovanovicTech Great, I just did it and you are right - surprisingly, ChatGPT was wrong.
Have you compared the performance of HybridCache vs IMemoryCache and IDistributedCache? It's interesting how much overhead we get from locking in HybridCache.
Also, FusionCache is a much better option atm - it has many more features and has been stable for years. At least atm.
I think you're right about FusionCache.
Didn't do a performance comparison
Nice video. Could you please show us how to do this for output caching, and how to invalidate it?
Ok I will try
Hmm... What do you mean by "output caching"? The term "output caching" applies, for example, to web pages _dynamically generated_ on the web server. But nothing is being generated here, so it is just caching. The returned value IS the output value!
Is HybridCache a lite version of FusionCache?
If so, what are the advantages of using HybridCache over FusionCache?
Thanks!
Different things. FusionCache had this functionality way before. HybridCache is MSFT's take on it.
So is SlidingExpiration officially not part of the HybridCacheOptions?
Seems not:
- github.com/dotnet/aspnetcore/issues/56754
- github.com/dotnet/extensions/issues/5649
The content is great. However, if 1,000 requests hit the endpoints simultaneously, it’s likely that multiple requests will access the database, not just one. Hopefully, you have other videos on caching that can be applied to real-world scenarios.
@@HoàngPhan-y7u If it's 1,000 requests for the same key, only one will hit the database.
@@MilanJovanovicTech This is a common issue called the "cache stampede" or "thundering herd" problem.
A cache stampede (or thundering herd) occurs when multiple requests try to access a cached item that has expired at the same time, causing multiple concurrent attempts to regenerate the same cached value. This can lead to:
1. Excessive database load as multiple requests try to fetch the same data
2. Potential race conditions
3. Performance degradation
To prevent a cache stampede with IMemoryCache, you need to implement additional patterns such as:
1. Sliding cache lock pattern
2. Using SemaphoreSlim for synchronization
3. Implementing a background refresh before expiration
4. Using stale-while-revalidate pattern
Microsoft recommends using IMemoryCache.GetOrCreate or GetOrCreateAsync methods, but these alone don't completely solve the stampede problem in high-concurrency scenarios.
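The SemaphoreSlim approach from the list above can be sketched like this; the class name, key format, and per-key lock dictionary are illustrative choices, not a library API:

```csharp
using System.Collections.Concurrent;
using Microsoft.Extensions.Caching.Memory;

// One SemaphoreSlim per cache key: when an entry expires, only the first
// request runs the factory; the rest wait and then read the refreshed value.
public class StampedeGuardedCache(IMemoryCache cache)
{
    private static readonly ConcurrentDictionary<string, SemaphoreSlim> Locks = new();

    public async Task<T> GetOrCreateAsync<T>(string key, Func<Task<T>> factory, TimeSpan ttl)
    {
        if (cache.TryGetValue(key, out T? cached))
        {
            return cached!;
        }

        var gate = Locks.GetOrAdd(key, _ => new SemaphoreSlim(1, 1));
        await gate.WaitAsync();
        try
        {
            // Double-check: another request may have populated the cache while we waited.
            if (cache.TryGetValue(key, out cached))
            {
                return cached!;
            }

            T value = await factory();
            cache.Set(key, value, ttl);
            return value;
        }
        finally
        {
            gate.Release();
        }
    }
}
```

With this guard, 1,000 concurrent requests for the same key should invoke the factory (and therefore the database) only once, which is essentially what HybridCache's built-in stampede protection does for you.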
To prevent multiple concurrent requests from hitting the database for the same key, we can implement a distributed lock pattern using Redis and RedLock.
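A hedged sketch of that idea using the RedLock.net library (the `RedLockNet.SERedis` package) with StackExchange.Redis; the connection string, lock key, and expiry are illustrative, and this assumes a reachable Redis instance:

```csharp
using RedLockNet.SERedis;
using RedLockNet.SERedis.Configuration;
using StackExchange.Redis;

var multiplexer = await ConnectionMultiplexer.ConnectAsync("localhost:6379");

using var factory = RedLockFactory.Create(
    new List<RedLockMultiplexer> { new(multiplexer) });

// Try to take a distributed lock for this cache key; the expiry bounds
// how long the lock is held if the holder crashes.
using var redLock = await factory.CreateLockAsync(
    resource: "lock:product:42",
    expiryTime: TimeSpan.FromSeconds(30));

if (redLock.IsAcquired)
{
    // Only one process across the cluster gets here for this key:
    // fetch from the database and repopulate the cache, then release.
}
else
{
    // Lock not acquired: another process is refreshing the entry;
    // fall back to waiting/retrying or serving a stale value.
}
```

Note this adds a network round trip per cache miss, so it's worth it mainly when the regeneration cost is high and multiple app instances share the cache.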
I've always had an issue with caching: how do you know whether the data has changed or not?
When you change the data, you'll know that the data changed :)
@@деменция-н4п Got it. So you're saying that if I change the data in another thread, I need to add extra logic to mark that cached information as stale or remove it?
That's why you configure cache expiration. Data is served from the cache until it expires or becomes invalidated. Once the cache expires, the updated data can be retrieved. If you want to see changes to your data immediately, you would need to check the server every time. However, doing so defeats the purpose of using the cache-aside pattern.
Clear the cache when making the update (simplest solution)
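With HybridCache that looks roughly like this; the service, repository, and key format are illustrative assumptions:

```csharp
using Microsoft.Extensions.Caching.Hybrid;

// Illustrative repository abstraction for this sketch.
public interface IProductRepository
{
    ValueTask<Product?> GetByIdAsync(int id, CancellationToken ct);
    Task UpdateAsync(Product product, CancellationToken ct);
}

public record Product(int Id, string Name);

public class ProductService(HybridCache cache, IProductRepository repository)
{
    public async Task<Product?> GetAsync(int id, CancellationToken ct = default)
    {
        return await cache.GetOrCreateAsync(
            $"product:{id}",
            async token => await repository.GetByIdAsync(id, token),
            cancellationToken: ct);
    }

    public async Task UpdateAsync(Product product, CancellationToken ct = default)
    {
        await repository.UpdateAsync(product, ct);

        // Clear the stale entry so the next read repopulates the cache
        // with the updated data.
        await cache.RemoveAsync($"product:{product.Id}", ct);
    }
}
```

HybridCache also supports tagging entries, so related keys can be invalidated together with `RemoveByTagAsync` instead of removing them one by one.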
There are multiple strategies for invalidating the cache. None of them is ideal - each has its own pros and cons. The simplest ones are manual invalidation and time-based invalidation. More advanced ones rely on some sort of external agent monitoring and updating the values.