Optimizing high Next.js ISR write usage and reducing Vercel costs

Hi team,

I’m currently seeing very high usage under ISR Writes (106M) on my project, and I’m trying to understand what could be causing this and how to reduce it.

Usage Breakdown

  • Edge Requests: 41M
  • Function Invocations: 45M
  • ISR Writes: 106M
  • ISR Reads: 30M
  • Image Transformations: 390K

Environment

  • Framework: Next.js (App Router)
  • Feature: ISR with revalidate config
  • Content: Dynamic property listing pages
  • Scale: 15k+ sitemap URLs
  • Data Source: Frequent content updates from the backend

Questions

  1. What typically causes ISR writes to grow this high?
  2. Could frequent revalidation or bot traffic be triggering excessive regenerations?
  3. Is it better to move certain pages to static generation instead?
  4. Would switching some dynamic routes to edge caching help?

I’d appreciate any guidance on debugging or optimizing ISR usage.

Thanks!

With 106M ISR writes vs 30M ISR reads, your write-to-read ratio is unusually high (about 3.5:1). This typically indicates one or more of these issues:

  1. Very short revalidation intervals – If your revalidate value is too low (e.g., 10-30 seconds), pages regenerate frequently even if content hasn’t changed (see the sketch after this list)

  2. Excessive on-demand revalidation calls – If your backend triggers revalidatePath() or revalidateTag() too aggressively (e.g., on every content update), each call forces a fresh regeneration, and a write, for every affected path

  3. Bot/crawler traffic triggering regenerations – Bots hitting many unique URLs can cause cache misses that trigger new page generations

  4. Large number of unique paths – With 15k+ sitemap URLs, even moderate traffic patterns can lead to significant write volume
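
To make cause 1 concrete, here is a minimal, hypothetical sketch of what an overly aggressive interval looks like in the App Router. The route, prop types, and API URL are illustrative assumptions, not your actual code:

```tsx
// app/listings/[id]/page.tsx — hypothetical anti-pattern sketch.
// A request arriving more than 10 seconds after the last regeneration
// triggers a background rebuild (an ISR write); across 15k+ unique
// URLs with steady traffic, that compounds quickly.
export const revalidate = 10;

export default async function ListingPage({
  params,
}: {
  params: { id: string };
}) {
  // Assumed backend endpoint, for illustration only.
  const res = await fetch(`https://api.example.com/listings/${params.id}`);
  const listing = await res.json();
  return <h1>{listing.title}</h1>;
}
```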

Recommendations to Reduce ISR Writes

  • Increase your revalidate interval: If your content doesn’t need minute-by-minute freshness, consider increasing from something like revalidate = 60 to revalidate = 3600 (1 hour) or higher. This alone can dramatically reduce writes (see the sketch after this list).

  • Use on-demand revalidation strategically: Instead of relying solely on time-based revalidation, use revalidatePath() or revalidateTag() only when content actually changes. This lets you set longer time-based intervals as a fallback.

  • Audit your revalidation triggers: If your backend sends webhooks on every CMS update, ensure you’re only revalidating the specific paths that changed, not entire route segments.

  • Consider static generation for stable content: Pages that rarely change (like archived listings) could use revalidate = false to generate once and never regenerate.

  • Batch revalidations: If multiple content items update simultaneously, use tag-based revalidation to invalidate related content in one operation rather than many individual path revalidations.
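
Combining the first and last points, a sketch might look like the following: a long time-based interval as a safety net, plus tagged fetches so on-demand revalidation can target exactly the content that changed. The tag names, route, and endpoint are assumptions for illustration:

```tsx
// app/listings/[id]/page.tsx — hypothetical sketch: long time-based
// fallback, with cache tags doing the precise invalidation work.
export const revalidate = 3600; // at most one regeneration per hour per page

export default async function ListingPage({
  params,
}: {
  params: { id: string };
}) {
  const res = await fetch(`https://api.example.com/listings/${params.id}`, {
    // Tagging the cached data lets one revalidateTag("listings") call
    // batch-invalidate every listing page, while the per-id tag can
    // target a single page.
    next: { tags: ["listings", `listing-${params.id}`] },
  });
  const listing = await res.json();
  return <h1>{listing.title}</h1>;
}
```

For genuinely stable pages, such as archived listings, swapping the export for revalidate = false generates the page once and never writes again.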

Quick Wins to Investigate

  1. Check your revalidate values across your property listing pages – are they all using the same low interval?

  2. Review any API routes or server actions calling revalidatePath() – are they being triggered more often than needed? (See the webhook sketch after this list.)

  3. Look at your analytics for bot traffic patterns that might be hitting many unique listing URLs.
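
For the second quick win, a webhook endpoint that invalidates only what changed might look like this sketch. The route path, header name, secret env var, and payload shape are all assumptions:

```ts
// app/api/revalidate/route.ts — hypothetical webhook endpoint: one
// tag-based invalidation per changed item instead of sweeping
// path-level revalidations.
import { NextRequest, NextResponse } from "next/server";
import { revalidateTag } from "next/cache";

export async function POST(req: NextRequest) {
  // Reject calls that don't carry the shared secret (assumed env var).
  if (req.headers.get("x-revalidate-secret") !== process.env.REVALIDATE_SECRET) {
    return NextResponse.json({ error: "unauthorized" }, { status: 401 });
  }

  // Assumed payload shape: { listingIds: string[] }
  const { listingIds } = await req.json();

  // Invalidate only the listings that actually changed; each tag covers
  // every page whose fetch was tagged with it, so updates batch naturally.
  for (const id of listingIds ?? []) {
    revalidateTag(`listing-${id}`);
  }

  return NextResponse.json({ revalidated: true, count: listingIds?.length ?? 0 });
}
```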

Thank you @swarnava!
