Next.js ISR revalidation propagation in shared components causing excessive serverless executions

Hey everyone,

I’m currently building pokemonstats.com and ran into an interesting architectural “gotcha” regarding ISR and shared components that I wanted to double-check with the community.

Context

I’m generating 1,350+ static pages (one for every Pokémon) using generateStaticParams. The goal is for these to be largely static (force-cache) since Pokémon stats rarely change.
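For reference, the params come from something roughly like this (simplified — the route segment name and API shape are illustrative, not my exact code):

```typescript
// Sketch of generateStaticParams for a /pokemon/[name] route.
// The PokéAPI list endpoint and the `name` param are assumptions here.
export async function generateStaticParams() {
  const res = await fetch('https://pokeapi.co/api/v2/pokemon?limit=1400');
  const data = await res.json();
  // One static page per Pokémon name
  return data.results.map((p: { name: string }) => ({ name: p.name }));
}
```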

The Problem

I have a shared <Hero /> component on every page that fetches my GitHub repo stars. Since the star count doesn’t change that often, I added a revalidation time of 1 week to that specific fetch.

Everything was good until I got a message that I was about to reach the limits of my free tier because of ISR reads/writes and, as a result, Fast Origin Transfer. I did notice a spike in recent traffic on the site, but it still felt unrealistic for this project to hit the free-tier limits.

Looking at the build logs, I saw that the revalidate column is set to 1w for every static route:

I realized that every page shares the Hero component that fetches the GitHub star count, and that its revalidate propagates to every route. Even though the data (stars) is the same for every route, Next.js treats /pokemon/bulbasaur and /pokemon/mewtwo as separate “islands”.

Questions

Does this mean that after one week, I wouldn’t just be making 1 request to update the stars? Would I potentially be triggering 1,350 separate Serverless Function executions and ISR writes—one for every single Pokémon page visited—just to update that one shared number in the UI?

Just wanted to double check if my assumption is correct before thinking of upgrading to Vercel’s Pro plan. I really love this project and I’m eager to think of a better solution to avoid wasting resources unnecessarily.

Spot on! Your assumption is exactly right. :smiley:

Since that Hero component with revalidate is shared across your generateStaticParams pages, each one of those 1,350+ pages effectively “inherits” its own ISR schedule.

This means that after the 1-week window, a visit to any page will kick off a background regeneration for that page. If your traffic hits all those pages, you’re looking at a huge spike in serverless executions, which is definitely why you’re seeing those limits pop up!

A couple ways we can optimize this:

Fetch client-side: This is the most straightforward path. If you move the GitHub stars fetch to a client-side call in your Hero component, your pages stay purely static and you stop the ISR “cascade.”
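For example, a small helper the Hero component could call from a useEffect (names are illustrative, not an exact implementation):

```typescript
// Illustrative client-side helper: fetching in the browser means the
// page itself stays fully static, with no ISR revalidate on the route.
export async function fetchStarCount(repo: string): Promise<number> {
  const res = await fetch(`https://api.github.com/repos/${repo}`);
  const data = await res.json();
  return data.stargazers_count;
}

// In the 'use client' Hero component, something like:
//   useEffect(() => { fetchStarCount('owner/repo').then(setStars); }, []);
```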

Create a dedicated API route: You can wrap the fetch in a route like /api/github-stars and handle the caching there. This keeps the logic centralized:

export async function GET() {
  // Parse the GitHub response before returning it; the CDN then
  // caches the route's response for a week via Cache-Control
  const res = await fetch('https://api.github.com/repos/...');
  const data = await res.json();
  return Response.json(data, {
    headers: {
      'Cache-Control': 's-maxage=604800, stale-while-revalidate'
    }
  });
}

Hope that helps! Let us know how you get on!

1 Like

I was thinking of going client side or creating a dedicated landing page for this hero section. I’m not a big fan of client fetching for stuff that doesn’t change often, but I hadn’t thought of the API approach you mentioned — guess I’ll try that one instead. I appreciate the help @pawlean :raising_hands:t2:

1 Like

No worries, Juan! :smiley:

Come back with updates - excited to hear how you get on :slight_smile: