We’ve been running Vercel Pro (`hnd1`) with Aurora Serverless v2 (`ap-northeast-1`) — same region, OIDC + IAM auth, the setup that Vercel recommends for Aurora on the Marketplace.
We kept seeing ~6 second delays on requests that needed a fresh database connection. Spent a few weeks digging into it and wanted to share what we found.
## What we tried
We went through a lot of dead ends before finding the actual cause:
- IAM auth overhead? No — OIDC → STS → RDS Signer completes in ~350ms. Not the problem.
- Server bundle too large? We cut it from 16MB to 5.5MB by disabling SSR and stripping out MUI/date-fns from the server bundle. Made zero difference to connection time.
- Aurora ACU too low? Bumped the minimum from 0.5 to 1 ACU. CloudWatch confirmed capacity was already stable at 0.5. No change.
- Fluid Compute? Added `"fluid": true` to `vercel.json`. The function instances stay warm, but that doesn’t help when the DB connection itself is cold.
- Cron warmup? We ran `SELECT 1` every 5 minutes. Totally useless — the cron ends up running on a separate instance from user requests, so the warmed-up connection never gets reused.
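One piece of this that *can* be amortized is the IAM step: RDS auth tokens are valid for 15 minutes, so a per-instance cache avoids repeating the OIDC → STS → Signer round trip on every new connection (it does nothing for the handshake, though). A minimal sketch — `makeCachedTokenProvider` and `TokenFetcher` are our illustrative names, not SDK APIs; in production the fetcher would wrap `Signer#getAuthToken()` from `@aws-sdk/rds-signer`:

```typescript
// Caches an RDS IAM auth token per instance. Tokens are valid for 15 minutes,
// so refreshing a minute early keeps connections from using a stale one.
type TokenFetcher = () => Promise<string>;

function makeCachedTokenProvider(
  fetchToken: TokenFetcher,
  ttlMs: number = 14 * 60 * 1000, // refresh one minute before the 15-min expiry
): () => Promise<string> {
  let token: string | null = null;
  let fetchedAt = 0;
  return async () => {
    if (token === null || Date.now() - fetchedAt > ttlMs) {
      token = await fetchToken();
      fetchedAt = Date.now();
    }
    return token;
  };
}

// fetchToken is injected (it would wrap @aws-sdk/rds-signer's getAuthToken()
// in production) so the caching logic stands alone here.
```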
## Where the time actually goes
We added instrumentation to every step. Here’s what a typical cold-connection request looks like:
| Step | Time |
|---|---|
| Credentials provider + Signer creation | ~1ms |
| OIDC → STS → IAM token | ~400ms |
| TCP + SSL handshake | ~5.2s |
| `pg_type` query | ~150ms |
| Actual SELECT | ~15ms |
| Total | ~6s |
In theory, if a request hits the same instance that already has a connection, queries finish in 13-32ms. But in practice, requests frequently land on a different instance — even “hot” ones — so this 5+ second handshake keeps happening.
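The numbers above came from wrapping each step in a timer. A minimal sketch of that instrumentation — `timed` is our own helper, not from any library, and the commented-out calls are placeholders for the real steps:

```typescript
// Runs one step, records its elapsed milliseconds under a label,
// and passes the step's result (or exception) through unchanged.
async function timed<T>(
  timings: Record<string, number>,
  label: string,
  step: () => Promise<T>,
): Promise<T> {
  const start = performance.now();
  try {
    return await step();
  } finally {
    timings[label] = performance.now() - start;
  }
}

// Usage sketch — `signer` and `client` are placeholders:
// const timings: Record<string, number> = {};
// const token = await timed(timings, "iam-token", () => signer.getAuthToken());
// await timed(timings, "connect", () => client.connect());
// console.log(timings); // { "iam-token": …, "connect": … }
```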
## The core problem
Vercel Function instances don’t share connection pools with each other. Even with Fluid Compute keeping instances alive, a request can land on a different instance that hasn’t connected to the DB yet. And without VPC-level connectivity (which requires Enterprise + Secure Compute), every new connection has to go through the public internet — adding ~5 seconds of handshake time.
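A toy model makes the failure mode concrete — this is NOT Vercel internals, just a counter standing in for the cold handshake. Each "instance" lazily creates its own module-scope pool on the first request it serves:

```typescript
// Counts how many cold connects (the ~5s handshake in production) occur.
let coldConnects = 0;

function createInstance() {
  let pool: { id: number } | null = null; // per-instance state, never shared
  return {
    handleRequest(): number {
      if (pool === null) {
        coldConnects += 1; // this is where the 5+ seconds go
        pool = { id: coldConnects };
      }
      return pool.id;
    },
  };
}

const warm = createInstance();
warm.handleRequest(); // cold: creates this instance's pool
warm.handleRequest(); // warm: reuses it
const fresh = createInstance(); // request routed to a different instance
fresh.handleRequest(); // cold again, even though `warm` is still alive
// coldConnects === 2: keeping instances alive doesn't share connections
```

The point of the model: Fluid Compute only eliminates cold connects for requests that land on an already-connected instance; routing to any other instance pays the full handshake again.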
This matters because Vercel launched Aurora PostgreSQL as a Marketplace integration in December 2025. The blog post says:
“Vercel runs on AWS infrastructure. Your functions and your database are in the same data centers, which means latency stays low.”
We followed the recommended setup exactly, and latency is not low. “Same data centers” doesn’t help when the connection goes over the public internet without any pooling layer.
## What would help
A few options that would fix this for Pro plan users:
- Add a connection pooling layer for Marketplace Aurora — similar to what Neon and Supabase offer by default
- Make Secure Compute (or at least VPC Peering) available on Pro, even in a limited form
- Default to Aurora Data API in the integration — it’s HTTP-based and doesn’t need a persistent TCP connection
- If none of the above, at least document this limitation in the Marketplace Aurora setup guide
## What we ended up doing
We migrated to Supabase, which has PgBouncer built in. The connection pooler eliminated the problem entirely.
We preferred the IAM-based auth setup with Aurora, but 6-second response times in production weren’t acceptable.
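In practice the switch came down to which endpoint the app connects to. A hedged illustration — hosts, ports, and the username format below are placeholders based on Supabase’s standard setup, and the real values should come from your own project dashboard:

```shell
# Illustrative placeholders only — not our real project values.
# Direct connection: every new client pays the full TCP + TLS handshake.
DATABASE_URL="postgresql://postgres:<password>@db.<project-ref>.supabase.co:5432/postgres"

# Pooled connection through PgBouncer (transaction mode): handshakes terminate
# at the pooler, so fresh serverless instances reuse warm server-side connections.
DATABASE_URL="postgresql://postgres.<project-ref>:<password>@<region>.pooler.supabase.com:6543/postgres"
```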
Environment:
- Platform: Vercel Pro, Tokyo (`hnd1`)
- Features: Fluid Compute enabled, TanStack Start
- Database: Aurora Serverless v2, Tokyo (`ap-northeast-1`), min 1 ACU, IAM auth