It seems like your usage is down. Let’s monitor and see what changed here. It would be a great lesson if it all comes down to new Date().
I don’t know if it is; that change went in 2 days ago, on 9/17 at around 10 a.m., and then adding force-cache and removing it also happened on 9/17, just in the evening.
So we are back to more writes than reads in the last 12 hours.
To give some context on the changes that have happened in the last 12 hours:
- An article published at 2025-09-19T13:06:00.000Z and then republished (updated) at 2025-09-19T17:38:43Z, as well as one published at 2025-09-19T17:17:22.802Z and then republished (updated) at 2025-09-19T17:41:55Z
- A deploy at Sep 19, 2025 at 11:02:49 AM MST
Ever since then, writes have been climbing. It looks like there was a spike around 11:05-11:10 a.m., which I guess corresponds with the new deploy, as the deploy took about 2-3 minutes.
Adding another screenshot of the last 12 hours, where I have 1k writes and 270 reads. In the last 12 hours for the website, there have been two deploys to production, one 10 hours ago and one 3 minutes ago (as of writing this). There hasn’t been a new article published since the ones mentioned in the above response, so 2025-09-19T17:38:43Z is the latest an article was published today.
Hi @jamesrsingleton, thanks for sharing all the new findings. Deploys can definitely increase cache writes, because every new deploy gets a fresh cache to avoid serving stale data.
I’m still of the belief that sanityFetch has a lot to do with this, because the last time I tested ISR with local data and deployed to Vercel, everything worked as expected.
Did you get a chance to use the time-based caching in sanityFetch? I see no harm in trying.
Hi @anshumanb,
No, I have not tried the time-based caching, because my assumption is it will behave the same as force-cache, where no updates happen until that time is up. That would mean the home page would not get the latest article, and none of our article-grouping landing pages would get updated either. So that would be the harm: no updates happen on the site. The reason is that the sanityFetch coming off defineLive does not accept a revalidate option; this is what it accepts:
```typescript
query: QueryString
params?: QueryParams | Promise<QueryParams>
tags?: string[]
perspective?: Exclude<ClientPerspective, 'raw'>
stega?: boolean
/**
 * @deprecated use `requestTag` instead
 */
tag?: never
requestTag?: string
```
The sanityFetch you see accepting revalidate in the next-sanity README is a custom function you have to build if you’re not using next-sanity/live.
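For reference, that README-style custom wrapper looks roughly like this. This is a sketch, not our code: the client config values are placeholders, and the exact option plumbing is an assumption based on the pattern next-sanity documents for projects not using next-sanity/live.

```typescript
// Hypothetical custom sanityFetch with time-based revalidation
// (placeholder projectId/dataset/apiVersion; not from our codebase).
import { createClient, type QueryParams } from "next-sanity";

const client = createClient({
  projectId: "your-project-id", // assumption: replace with real values
  dataset: "production",
  apiVersion: "2025-09-19",
  useCdn: false,
});

export async function sanityFetch<T>({
  query,
  params = {},
  revalidate = 60, // seconds
  tags = [],
}: {
  query: string;
  params?: QueryParams;
  revalidate?: number | false;
  tags?: string[];
}): Promise<T> {
  return client.fetch<T>(query, params, {
    next: {
      // When tags are provided, tag-based invalidation takes over
      // and time-based revalidation is disabled.
      revalidate: tags.length ? false : revalidate,
      tags,
    },
  });
}
```

The trade-off is the one described above: with a pure time-based `revalidate`, the home page and landing pages would lag behind new articles until the window expires, unless tags are also passed.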
Out of 4.3k calls to Sanity, 4k of them are cached within 12 hours during the day.
I mean, even looking at my edge caching, less than half of the /[slug] pages are hitting the edge cache, and that’s overnight while nothing is happening on the site.
```typescript
{
  source: '/((?!api/).*)',
  headers: [
    {
      key: 'Cache-Control',
      value: 'max-age=0, s-maxage=86400',
    },
  ],
},
```
This is what I have in my next.config.ts; not sure if it’s important.
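For context, a minimal sketch of where that entry sits in a next.config.ts (the rest of the config is omitted here): `max-age=0` keeps browsers revalidating on every visit, while `s-maxage=86400` lets the shared edge cache hold responses for a day.

```typescript
// Sketch of the headers() config around the entry above
// (assumption: other config options omitted for brevity).
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  async headers() {
    return [
      {
        // Everything except /api routes
        source: "/((?!api/).*)",
        headers: [
          {
            key: "Cache-Control",
            // Browsers: always revalidate. Edge: cache up to 24h.
            value: "max-age=0, s-maxage=86400",
          },
        ],
      },
    ];
  },
};

export default nextConfig;
```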
Hi @jamesrsingleton, I see. I’m wondering what else could be the reason, because writes should be the same but invalidation is too frequent. Is there a way to disable the Sanity tags revalidation and use a custom path-based revalidation approach? Did you try any other changes?
Ok, so I don’t think it’s the sanityFetch from next-sanity/live that is causing it. Apparently someone needs to be VISITING the actual page that is getting revalidated via tags to even kick off its revalidation. I confirmed this multiple times: I published an update to an article, observed no logs in Vercel, visited the URL, and observed the change had not been made.
Then, while sitting on that URL, I made another change to the article and published it. This time I noticed logs in Vercel for the SanityLive revalidate tag. After refreshing, both the change from before and the new change were there.
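That observed behavior matches how tag revalidation works in Next.js: revalidateTag only marks the tagged cache entries stale, and the page is rebuilt lazily on the next request, which is why nothing appeared in the logs until the page was visited. A minimal sketch of an on-demand revalidation route handler, for illustration only (`<SanityLive />` wires up its own endpoint, and the request body shape here is an assumption):

```typescript
// Illustrative Next.js route handler for tag-based revalidation.
// revalidateTag does NOT eagerly rebuild pages -- it invalidates the
// tagged cache entries, and regeneration happens on the next visit.
import { revalidateTag } from "next/cache";
import { NextResponse } from "next/server";

export async function POST(request: Request) {
  // Assumed body shape: { "tags": ["sanity:s1:jj/CuA", ...] }
  const { tags } = (await request.json()) as { tags: string[] };
  for (const tag of tags) {
    revalidateTag(tag);
  }
  return NextResponse.json({ revalidated: tags });
}
```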
The URL in question was https://www.redshirtsports.xyz/south-dakota-state-football-2025-season-preview
In the screenshot above, you can see my first visit. It causes an ISR cache update (for some reason; I have no idea why), then two subsequent visits, which were me refreshing to see if the change had somehow been made. Then you see the POST with <SanityLive /> revalidated tag: sanity:s1:jj/CuA, which actually ended up revalidating four separate tags. Then you see my next visit, which once again resulted in an ISR cache update.
I then visited it again while writing this; one request resulted in a 304 and another in a 200. Both were cache hits, so I’m not sure why one was a 304 and one a 200.
Hi @jamesrsingleton, about the 304 and 200: I’ve asked the team. It looks weird, because both were edge cache hits. Now I have a theory: do you have queries at the layout.tsx level that depend on revalidateTag invocations? For example, when a page is created or edited? If that action causes a query revalidation at the layout level, then all pages under the layout will need revalidation, which might explain the cascade of ISR writes we see.
What do you think?
So, not specifically in layout.tsx. However, I have some queries in the Navbar component that would end up getting revalidated, and that component is in layout.tsx; would that trickle up and potentially fall under what you’re mentioning? The query in question basically grabs any new sport, division, and conference that gets written about.
```typescript
client.fetch(
  globalNavigationQuery,
  {},
  {
    cache: 'force-cache',
    next: {
      revalidate: 604800,
    },
  },
),
client.fetch(
  queryGlobalSeoSettings,
  {},
  {
    cache: 'force-cache',
    next: {
      revalidate: 604800,
    },
  },
)
```
So I just pushed a change to the navbar to use the above instead of sanityFetch, to see if that helps at all.
Well, unfortunately, I don’t think that was it. The change went out over an hour ago, and we are at a 77% cache miss rate in the last hour, with 16x more writes than reads.
Ok, I am a little confused about when an ISR update/write happens. I just saw a request come in for /socon-hoops-1990s-decade-of-glory-mocs-are-born-winners. However, that page has not been touched in forever, yet something triggered it to be updated.
Hi James, thanks for sharing the update. About the last request you shared: we don’t keep logs beyond 1 hour for hobby customers, so it’s hard to look back and see what could have caused it.
I’m reaching out to other support staff on our team who may have more experience with ISR and SanityLive. Hopefully we are able to offer some insights.
Thanks! I also have log drains set up for this project to Axiom. I think they were grandfathered in, since Vercel removed them from the free tier after initially allowing them.
That’s nice. We can check there as well for specific routes; say for /something, how many times it is being served from the ISR cache versus not. That’s a good starting point.
Anything in particular I should look for in the logs?
Hi @jamesrsingleton, I think we can look for patterns. For example, we pick a popular page that has frequent visits, enough to find the cache logs and revalidations.
Say you redeployed today at 9 a.m.; the cache resets at that point. From then onwards, we see whether this page was revalidated without any reason, and how many times it was a cache HIT versus a cache MISS.
I’m not sure if the Axiom logs have it, but that’s one way I can see to narrow it down.
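If the exported log entries can be loaded as JSON, a small helper could tally cache outcomes per route so unexplained MISSes stand out. This is a hypothetical sketch: the `path` and `vercelCache` field names are assumptions based on the `request.vercelCache` attribute visible in the Axiom data, and the sample entries are made up.

```typescript
// Tally cache outcomes (HIT, MISS, STALE, ...) per route from exported logs.
type LogEntry = { path: string; vercelCache: string };

function cacheStats(entries: LogEntry[]): Map<string, Record<string, number>> {
  const stats = new Map<string, Record<string, number>>();
  for (const { path, vercelCache } of entries) {
    const counts = stats.get(path) ?? {};
    counts[vercelCache] = (counts[vercelCache] ?? 0) + 1;
    stats.set(path, counts);
  }
  return stats;
}

// Made-up sample entries for illustration:
const sample: LogEntry[] = [
  { path: "/2024-2025-dii-head-coaching-changes", vercelCache: "MISS" },
  { path: "/2024-2025-dii-head-coaching-changes", vercelCache: "MISS" },
  { path: "/some-other-article", vercelCache: "HIT" },
];
console.log(cacheStats(sample).get("/2024-2025-dii-head-coaching-changes"));
// → { MISS: 2 }
```

A route that shows mostly MISS despite no content changes since the last deploy would be a good candidate to dig into.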
So I am unable to see the paths under Observability in order to tell which paths are being hit. Our last production deploy was 2 days ago; we started implementing a canary/development branch that we merge our changes into, and once enough changes have been merged, we merge that into main.
So, seeing as our last production deploy was 2 days ago, Oct 4, 2025 at 8:35:23 AM MST to be exact… we are seeing 460 writes and 273 reads for the /[slug] page. In the last 2 days, we have had two articles published, both yesterday, maybe 20 minutes apart. I took a look at a random path, /2024-2025-dii-head-coaching-changes, which hasn’t had a change since 6/25/2025, and looking at Axiom, there are requests for this article almost daily.
However, looking at the two GET requests for today, request.vercelCache is MISS for both of them.
Those two GET requests were made 7 hours apart from each other… Looking at the log output from Vercel, it says the ISR cache was updated.