Thank you for taking the time to share your expertise-it's greatly appreciated!
Thanks!
Finally. Been waiting for this demo
Nothing beats Pages Router caching. So clear and simple.
This deserves a lot of visibility, thank you!
Thanks for this clear and detailed explanation
Thanks Jack. We love you.
As always, I enjoy the content.
Awesome video!
Thanks this is so helpful
Thank you so much for the video
What a great video ⚡
great video.
What's the easiest way to tell fetch to always honor the cache control headers from the API? Would that be ergonomic when using API routes? As a backend guy, it feels more intuitive to let the data owner (the API) say how long it should be cached.
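One hedged answer to the question above: Next's data cache does not read the origin's `Cache-Control` header on its own; you opt in with `next: { revalidate }`. A minimal sketch of bridging the two yourself — `cachedFetch` and the HEAD-then-GET approach are illustrative assumptions, not an official Next.js API:

```javascript
// Parse `max-age` (seconds) out of a Cache-Control header value.
function maxAgeFrom(cacheControl) {
  const match = /(?:^|,)\s*max-age=(\d+)/.exec(cacheControl ?? "");
  return match ? Number(match[1]) : undefined;
}

// Hypothetical wrapper (assumes a Next.js runtime): ask the origin for its
// headers first, then let Next cache the real request with that TTL.
async function cachedFetch(url) {
  const head = await fetch(url, { method: "HEAD", cache: "no-store" });
  const revalidate = maxAgeFrom(head.headers.get("cache-control")) ?? 0;
  // `next.revalidate` is what actually controls Next's data cache lifetime.
  return fetch(url, { next: { revalidate } });
}
```

The extra HEAD request is the cost of letting the API stay the source of truth for cache lifetimes.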
Honestly, the standard, non-experimental version feels more intuitive to me.
Hi, are you planning on doing a Black Friday deal on your Next.js Pro course? :)
Jack, how do you get your Terminal @3:31 to look so bloody awesome?
It'd be nice if the ProNextJS course mentioned had a deep-dive section on Middleware, e.g. setting a cookie on the request (not the response) is not trivial and not well documented.
thanks for sharing
Really good video, but what about client-side data caching? I want data cached for at least a specific time: if I move between routes, it should not call that API again and should show the cached data instead. I have a basic React Query implementation, but it doesn't seem to be working, even with the staleTime property explicitly set. If you or anyone here knows how to do this, let me know.
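For anyone hitting the same issue: `staleTime` only suppresses refetching while the cached entry survives, and the usual gotcha is creating a fresh `QueryClient` on every render, which throws the cache away on navigation. A sketch of the staleness rule itself (illustrative, not React Query's internals), with the common fix noted in comments:

```javascript
// A query refetches on remount/route change only once it has gone stale.
function isStale(lastFetchedMs, staleTimeMs, nowMs) {
  return nowMs - lastFetchedMs >= staleTimeMs;
}

// Common gotcha (sketch of the typical setup, names from React Query's docs):
// create the client ONCE, outside the component, and share it via the
// provider — otherwise every render starts with an empty cache.
//
//   const queryClient = new QueryClient({
//     defaultOptions: { queries: { staleTime: 5 * 60 * 1000 } },
//   });
//   <QueryClientProvider client={queryClient}>...</QueryClientProvider>
```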
Random thought: suppose I have an endpoint that queries a CMS to get 10 items and queries a backend to get prices for the 8 available items. Now I have to merge and filter both API responses to get the data and prices for just those 8 items. How do you cache that scenario?
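One way to approach the merge question above: cache the two upstream calls separately (CMS content and prices usually have different lifetimes) and compute the merge per request, since it's cheap. A sketch of the merge step itself — the field names (`id`, `price`) are assumptions for illustration:

```javascript
// Keep only CMS items that have a backend price, and attach that price.
function mergeItemsWithPrices(cmsItems, prices) {
  const priceById = new Map(prices.map((p) => [p.id, p.price]));
  return cmsItems
    .filter((item) => priceById.has(item.id))
    .map((item) => ({ ...item, price: priceById.get(item.id) }));
}
```

Because the merge is pure and fast, only the two fetches need caching; revalidating either one automatically yields a fresh merged result.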
As far as I know, the Pages Router had on-demand ISR, which was a cool feature; I was implementing it in my project.
Again though, the ISR granularity is a whole page.
@@jherr Yes, but with on-demand ISR there was no problem of a product being updated in the admin panel while the site still showed the old version. With revalidate set to 60, the product shows outdated information for up to 60 seconds. With on-demand ISR you could set revalidate to 24 hours and not worry about the product going stale, because as soon as it was updated, a revalidation request was made for the product page and it was updated immediately.
@@astralking6162 But with ISR you can't control what data on the page is cached at what level. You could send along cache control headers from the API, but if you're doing ISR with server functions, those don't have a persistent fetch cache. So you'll always be updating all the data at the fastest SLA.
Honestly, of late it feels like there's no one at the wheel at Vercel in terms of API design. What a mess. Great explanation though.
In my app I have tons of data-fetching use cases (around 100). Do I need to actually place 'use cache' and the timer on every single one?
🤔 You could keep them all in a single 'lib/fetchers' file and put the directive at the top?
But that would apply the same caching behavior to all of them!
Out of curiosity, how do you do it currently?
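For the thread above, a plain-JS sketch of the idea of a shared fetchers module where each fetcher still gets its own lifetime — this is an analogy to per-function `'use cache'` plus `cacheLife`, not a Next.js API; `withTTL` and `getProduct` are made-up names:

```javascript
// Generic TTL memoizer: each wrapped fetcher carries its own cache lifetime,
// so one module can hold many fetchers with different caching behavior.
function withTTL(fn, ttlMs, now = Date.now) {
  const cache = new Map(); // key -> { value, expires }
  return (...args) => {
    const key = JSON.stringify(args);
    const hit = cache.get(key);
    if (hit && hit.expires > now()) return hit.value; // cache hit, still fresh
    const value = fn(...args);
    cache.set(key, { value, expires: now() + ttlMs });
    return value;
  };
}

// Usage: one "fetchers" module, a different TTL per function.
let calls = 0;
const getProduct = withTTL((id) => ({ id, call: ++calls }), 60_000);
```

In real Next.js code the equivalent is one file of cached functions, each with its own `cacheLife` setting, rather than one directive governing everything.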
You are the best!
Where does requestTime come from? Is that a Next augmentation?
No. It’s returned from the API.
You technically could wrap the cached components in Suspense as well, correct? It's just not mandated because the load would be really fast?
Yeah, absolutely.
@@jherr Is it good practice to always wrap asynchronous server components with Suspense, even when they are cached? Or is that overkill? There's no clear explanation about that in the official docs.
@@SaltyRain1 I can't say for sure. But it just falls back onto the semantics of Suspense regardless of whether it's cached, because the cache might not hit. So do you want the render to block on that component or not, in the case of a cache miss? That's the question.
Is this kind of like PPR, from another angle?
Maybe? I'd have to think about that for a bit.
Hey Jack, does caching with "use cache" work on a VPS, not just on Vercel?
Yes. Caching works on VPS.
@@jherr VPS? (sry I'm noob)
@ no worries. Virtual Private Server. Basically a box on the internet that you install on and go. Really he means any non-Vercel deployment.
@jherr like EC2? (thx for da help)
@ yes. EC2 or ECS or EKS or just a $4.99/mo hosting site.
4:35 Prices are getting higher even in study projects 😢😊
Is it me, or are Jack's lips always out of sync? 😅 Content good as always though!
We need one more explanation: the React built-in experimental `cache` function.
Well, anything that involves a so-called "cache" is something to be aware of. It's a trade-off between speed and server workload. Caching !== optimization.
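On the React `cache` question above: unlike a TTL cache, React's `cache()` is per-request deduplication — repeated calls with the same argument during one render share a single result. A plain-JS analogy of that behavior (a sketch, not React's implementation; `dedupe` and `getUser` are made-up names):

```javascript
// Per-request dedupe: the first call for a given argument does the work,
// later calls with the same argument reuse the stored result.
function dedupe(fn) {
  const results = new Map();
  return (arg) => {
    if (!results.has(arg)) results.set(arg, fn(arg));
    return results.get(arg);
  };
}

// Usage sketch: several components asking for the same user trigger one lookup.
let hits = 0;
const getUser = dedupe((id) => {
  hits++; // counts how many times the underlying work actually runs
  return { id };
});
```

In React, that `results` map conceptually lives only for the duration of one server render, which is why `cache()` is about dedupe within a request, not caching across requests.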
whoever pushed for "use cache" api should be fired. why in the world would you use magic strings for anything. magic strings that influence how infra behaves no less. using normal functions were not cool enough?
I've never used next before, but a directive seems like an absurd way of going about it
Why not just a function?
Directives have been around for a while, even JS 'use strict' from back in the day. Not that surprising.
If you don't like it, don't use it.
@@adityaanuragi6916 Maybe use the tech for a bit first before shit-posting and/or hating on it.
@@codefinity It was a bad idea back then, too.
Just use TanStack; infinitely better DX with far fewer gotchas and black-box stuff in it...