[▲ Vercel Community](/)
[Help](/c/help/9)
# Massive Slowdown & 4× Billing Increase Starting Exactly on New Billing Cycle
214 views · 11 likes · 28 posts
Ahmedghribstranger (@ahmedghribstranger) · 2025-12-07
Hi everyone,
I’m seeing a sudden, extreme degradation in Vercel Function performance that began **exactly at the start of my new billing cycle (Nov 21)**. Nothing changed in my code or infrastructure, yet function duration increased by **10×–40×**, resulting in **132.52 GB-Hours billed this cycle vs 30.35 GB-Hours last cycle**.
I am sharing the complete data below so Vercel staff or the community can help identify whether this is a Vercel-side issue.
---
# 🧩 **1. Summary of the Issue**
* **All my API routes suddenly became 10–40× slower starting Nov 21.**
* **No code changes**, no new queries, no new dependencies, no traffic spike.
* The slowdown applies to *every* Node.js function except one.
* As a result, function duration billing increased from **$0 → $23.94**, even though usage is lower than the previous cycle.
* MongoDB performance logs show normal latency.
* This strongly suggests a Vercel platform regression or environment change.
---
# 🧩 **2. Route-Level Performance Comparison (Before vs After Nov 21)**
## **📆 Nov 21 → Dec 7 (current billing cycle)**
*(These durations are extremely abnormal for my app)*
| Route | Invocations | GB-Hours | P75 Duration | Error Rate |
|---|---|---|---|---|
| `/api/messages/unread-count` | 32K | **53.78** | **11s** | 52.8% |
| `/api/connection-requests/received` | 33K | **15.23** | 2.53s | 0% |
| `/api/notifications/unread-count` | 33K | **14.23** | 2.4s | 0% |
| `/api/study-posts/[id]/increment-view` | 39K | **9.78** | 1.78s | 0% |
| `/api/ad/random` | 39K | **6.71** | 1.25s | 0% |
| `/api/study-posts/search` | 13K | **5.87** | 2.56s | 0% |
| `/api/tags/recommended` | 10K | **5.01** | 2.65s | 0% |
| `/api/users/connected-user` | 9K | **4.33** | 2.73s | 0% |
| `/api/stream/webhook` | 17K | **4.2** | 1.27s | 0.1% |
| `/api/sale-items/relevant` | 11K | **2.36** | 1.68s | 0% |
---
## **📆 Nov 5 → Nov 20 (previous period — SAME CODE)**
*(All functions were extremely fast and cheap)*
| Route | Invocations | GB-Hours | P75 Duration | Error Rate |
|---|---|---|---|---|
| `/api/messages/unread-count` | 46K | **6.64** | **69ms** | 4.3% |
| `/api/connection-requests/received` | 47K | **1.93** | 64ms | 0% |
| `/api/notifications/unread-count` | 47K | **1.66** | 40ms | 0% |
| `/api/ad/random` | 65K | **1.18** | 18ms | 0% |
| `/api/study-posts/search` | 18K | **1.05** | 153ms | 0% |
| `/api/tags/recommended` | 14K | **1.03** | 175ms | 0% |
| `/api/users/connected-user` | 13K | **1** | 196ms | 0% |
| `/api/study-posts/[id]/increment-view` | 58K | **0.67** | 20ms | 0% |
| `/api/sale-items/relevant` | 12K | **0.47** | 151ms | 0% |
| `/api/feedback-images` | 8.4K | **0.41** | 150ms | 0% |
---
# 🧩 **3. Billing Comparison**
## **💸 Current Billing Cycle**
* **Function Duration:** 132.52 GB-Hours → **$23.94**
* **Function Invocations:** 311.67K → $0.60
* **Build Minutes:** $0.41
* **Observability base fee:** $4.67
* **Total:** $5.03 after $20 credit
---
## **💸 Previous Billing Cycle**
* **Function Duration:** 30.35 GB-Hours → **$0.00**
* **Function Invocations:** 870.66K → $0.00
* **Everything free under Pro plan credit**
* **Total owed: $0**
---
# 🧩 **4. Why This Appears to Be a Vercel Issue**
### ✔ No code changes between cycles.
### ✔ DB latency from MongoDB Atlas remains normal.
### ✔ Functions that were consistently fast (20–150ms) suddenly take 1–11 seconds.
### ✔ Slowdown affects **all** functions except one → unlikely to be DB-related.
### ✔ Slowdown started **exactly at billing cycle boundary (Nov 21 @ 9am)**.
### ✔ GB-Hours increased **4.3×** while traffic is **lower**.
This strongly suggests:
* A runtime regression
* A change in function scheduling or compute allocation
* Unexpected throttling
* A bug in duration metering
* A change in Fluid Compute or cold start behavior
---
# 🧩 **5. What I’ve Done Temporarily**
To avoid runaway billing while debugging, I set:
```ts
export const maxDuration = 1;
```
on several endpoints to prevent long-hanging requests from inflating GB-Hours.
I may temporarily move API routes to a standalone Node server until I understand what’s going on.
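For reference, here is a minimal App Router sketch of what that cap looks like in a full route file (the path, handler body, and response shape are illustrative, not my actual code):

```typescript
// Illustrative route file, e.g. app/api/example/route.ts (hypothetical path).
// `maxDuration` is a route segment config that caps how long the function
// may run, so a hung request can bill at most ~1 second of duration.
export const maxDuration = 1; // seconds; deliberately aggressive while debugging

export async function GET(): Promise<Response> {
  // A real handler would perform a single MongoDB query here.
  const unreadCount = 0;
  return Response.json({ unreadCount });
}
```

The trade-off is that any legitimate request taking longer than the cap is killed, so this is only a stopgap while investigating.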
---
# ❓ **6. My Questions for Vercel / Community**
### 1. Did Vercel make any changes to Node.js function environments around Nov 21?
### 2. Are other users experiencing 10×–40× function slowdowns this billing cycle?
### 3. Could this be a regression in Fluid Compute or shared runtime allocation?
### 4. Is there a known issue where function duration is being **overcounted**?
### 5. How can I ensure my project isn’t being placed on degraded compute nodes?
### 6. Why do functions returning a single Mongo query suddenly take 2–11 seconds?
---
# 🙏 Thank You
This issue has a huge impact on my costs and app performance, so any insight from the Vercel team or other developers would be greatly appreciated.
Stella (@kodingdev) · 2025-12-21
Also experiencing this, any updates would be lovely to hear!!!
burlinolle (@burlinolle) · 2025-12-23
My bills got 4–5 times bigger overnight without any change in code or traffic. I've been trying to get an honest explanation from support, without success. Switching to Fluid compute won't make a difference in my case. I've migrated my most trafficked sites to other providers in the meantime. It's absurd!
Freek (@freekboon) · 2026-02-11
Same issues here. Function duration has increased most but other services also show an increase. And also unable to get any answers…
Anshuman Bhardwaj (@anshumanb) · 2026-02-11
Hi there, could you share the project IDs where you experienced this issue? Also, what Next.js version are you using?
Freek (@freekboon) · 2026-02-11 · ♥ 1
@ahmedghribstranger I just bumped into [this](https://nextjs.org/blog/next-16#core-features--architecture) (read the “trade-off”) in the release notes. Are you running Next v16.x.x?
Freek (@freekboon) · 2026-02-11
The most dramatic increase is on this project: `prj_721WXjNMscnCUbWqFeU9q6tn1vaC`. It's running Next v16.1.5
Anshuman Bhardwaj (@anshumanb) · 2026-02-11
Thanks for sharing. Glad you pointed out the tradeoff section from the Next 16 release notes. It might be related but we have to see what is causing your issue.
When did you upgrade to v16 in the project you shared? Does the increase in cost coincide with the update?
Freek (@freekboon) · 2026-02-11 · ♥ 1
Yes, they strongly coincide. Not just in the project mentioned but in all other projects as well.
Ahmedghribstranger (@ahmedghribstranger) · 2026-02-11
Hey there,
Not really, I'm using Next 15, and I literally changed nothing when this happened.
Anyway, I simply moved to AWS.
Anshuman Bhardwaj (@anshumanb) · 2026-02-11
I see. Thanks for sharing that. Let me dig into this with our team and see how I can help.
Freek (@freekboon) · 2026-02-11 · ♥ 1
Correction: We've updated from 15 to 16 on the 28th of November but the most significant increase is on the 5th of December. On that day we only patched v16.0.5 to v16.0.7 because of the CVE-2025-55182 vulnerability.
Thank you for looking into it!
Anshuman Bhardwaj (@anshumanb) · 2026-02-11
Got it. When was the switch from 16.0.7 to 16.15?
Freek (@freekboon) · 2026-02-11 · ♥ 1
Assuming you mean v16.1.5 and not v16.15, that was on 27th of January. The timeline:
| Date | Version |
|---|---|
| 28/11 | 15.5.4 -> 16.0.5 |
| 05/12 | 16.0.5 -> 16.0.7 (increase in usage) |
| 12/12 | 16.0.7 -> 16.0.10 |
| 05/01 | 16.0.10 -> 16.1.1 |
| 27/01 | 16.1.1 -> 16.1.6 |
Anshuman Bhardwaj (@anshumanb) · 2026-02-12
Thanks for correcting that; this list is quite helpful.
I notice you aren't using Fluid compute; have you tried it? It may reduce the Function usage bill, as we only charge for active CPU there.
I'm still working with Next.js team and support to see how else we can fix this.
Freek (@freekboon) · 2026-02-17 · ♥ 1
Any updates on this?
Fluid compute might negate the increased costs, but it doesn't explain them. I find it difficult to accept the increase when it's not caused by increased traffic, builds, or functionality.
And to be honest, it's getting so ridiculous that I'm inclined to agree with Ahmedghribstranger and just move to another provider.

*(Screenshot: graph of the last 12 months, per week, showing no change in traffic, number of pages, caching, or functionality.)*
Anshuman Bhardwaj (@anshumanb) · 2026-02-17
Hi @freekboon, thanks for sharing this information. It is taking longer because it was initially mistaken for another issue, which was solved by enabling caching and Fluid compute.
I'm still checking with the team. I'll post an update here when I have one.
Zack Tanner (@zacktanner) · 2026-02-17 · ♥ 3
Hey @freekboon – I just took a look at the project you reported and I think you might be running into [this bug](https://github.com/vercel/next.js/issues/87090) (specifically your `[...segments]` routes). There's a routing bug that causes these to result in cache MISSes. I'm currently investigating the cause and will be working on a fix, but in the meantime you could try renaming those files to a different value and see if that alleviates the issue. I'll update the referenced GitHub issue once the underlying bug is fixed.
Freek (@freekboon) · 2026-02-18
Thanks for the update and I will look into it. But we are not using Cache Components in this project. Is it possible for this bug to occur without using Cache Components?
Zack Tanner (@zacktanner) · 2026-02-18 · ♥ 3
This would happen regardless of using cache components. A fix has landed on canary and will go out in our next stable release.
Swarnava Sengupta (@swarnava) · 2026-02-19
We have released **[v16.2.0-canary.51](https://github.com/vercel/next.js/releases/tag/v16.2.0-canary.51)** which should improve the situation. Let us know how it goes.
Freek (@freekboon) · 2026-02-19
We released a patch (renaming `[...segments]`) yesterday afternoon (12:37 UTC) and are monitoring usage. Right now it's too early to tell if the difference is significant.
Freek (@freekboon) · 2026-02-23
I'm afraid renaming the `[...segments]` folder to `[...pageSegments]` **hasn't** fixed the issue. The fix was deployed midday on the 18th; see the red arrow in the screenshot below.
We're hesitant to upgrade to a canary release as this is a major customer of ours.
Can you advise us on next steps?

Freek (@freekboon) · 2026-02-26
@swarnava, @zacktanner and @anshumanb, are there any updates on this? We have another customer with an upgrade to the App Router pending, but we're reluctant to proceed because of this increase in costs.
Anshuman Bhardwaj (@anshumanb) · 2026-02-26
Hi @freekboon, sorry for the delayed response. Since you can't deploy the canary release right now, I can only recommend waiting for the next minor release. Let me check with the team once again about this.
Anshuman Bhardwaj (@anshumanb) · 2026-02-27
Hi @freekboon, I found another correlation in the data that we weren't looking at initially because we were investigating the Next.js issues.
So, in addition to your Next.js version changing, something else happened with this project: AI agents discovered it.
In the last 90 days of data, AI agent activity has been responsible for more than 6× the function duration usage of actual human visitors.
I'd recommend visiting the Observability dashboard and filtering on some of the AI agents (specifically ClaudeBot and GPTBot). The Facebook crawler is another culprit here.
Please review whether you need this bot traffic, and then set rules using the Vercel Firewall.
Give this a try for a couple of weeks and let me know how it goes. :crossed_fingers:
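As a generic illustration of that kind of filtering (a plain user-agent check you could run in Next.js middleware, not the Vercel Firewall itself, which is configured in the dashboard; the helper name and bot list are illustrative):

```typescript
// Hypothetical helper: match the crawler user-agents mentioned above.
// The list is illustrative; tune it to your own Observability data.
const BLOCKED_BOTS: RegExp[] = [/claudebot/i, /gptbot/i, /facebookexternalhit/i];

export function isBlockedBot(userAgent: string | null): boolean {
  if (!userAgent) return false;
  return BLOCKED_BOTS.some((pattern) => pattern.test(userAgent));
}

// In middleware.ts you could then short-circuit these requests before
// they ever invoke a billed function, for example:
//
//   export function middleware(req: Request) {
//     if (isBlockedBot(req.headers.get("user-agent"))) {
//       return new Response(null, { status: 403 });
//     }
//   }
```

Blocking in middleware avoids the function invocation entirely, whereas a check inside the route handler would still incur (shorter) billed duration.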
Freek (@freekboon) · 2026-03-02
I find this a very unsatisfactory answer, and marking this thread as resolved is very premature. It is **not** resolved by renaming `[...segments]` to `[...pageSegments]`.
Are AI agents discovering **all** our clients in roughly the same week, **except** the one client that still runs on the Pages router? That would be too coincidental and highly unlikely.
We are currently looking into alternatives to Vercel. It is absolutely unacceptable for billing to triple with no answers for weeks, months even. These costs might not be significant to a company as big as Vercel, but they are very significant for our clients, and we can no longer endorse Vercel as a reliable platform.
Anshuman Bhardwaj (@anshumanb) · 2026-03-02
I understand your point of view. But, as I said, you can verify the data points I shared in your Observability dashboard.
About the response delays:
> As shared previously, your issue was paired with another Next.js-related investigation our team was leading, hence it took longer. Thanks to this post, we discovered and patched a bug in another part of the app. I still feel I should have checked the Observability logs first, but like any human I fell for recency bias and missed it. Moving forward I will personally make it SOP to check the Observability section first.
I've updated my marked solution because the Next.js fix didn't resolve your issue. I still stand by my findings about AI bot traffic. I hope you give it a try and see if the situation improves, at least until you move off Vercel (*not that I want that*). In the end, my goal here is to help the Community in whatever way I can.