Updated v0 pricing

Hey all, I am on the v0 team at Vercel.

First – thank you all for your feedback; we understand your frustration, but I want to let you know we will not be reverting this change.

Our inference costs are based on input and output tokens. Normalizing those into a single “token” or “message” unit shared across input and output is very difficult. To get around that, our competitors are simply jacking up their pricing to $0.25+ per message regardless of the request and response size (for comparison, the median message on v0 today costs about $0.08, roughly a third of that). Those inflated per-message prices seem unfair to us.

We decided the most transparent, future-proof, and reliable solution is to take inspiration from how frontier model labs charge for on-demand inference: a credit pool with incremental burndown based on consumption. That’s what we’ve done with the new pricing model.
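
As a rough illustration of how token-based burndown works (the rates, numbers, and names below are made up for the example, not our actual prices), input and output tokens are priced separately and each message’s cost is deducted from your credit pool:

```ts
// Illustrative only: hypothetical per-token rates, not v0's actual pricing.
type Usage = { inputTokens: number; outputTokens: number };

const RATES = {
  inputPerMillion: 3.0,   // $ per 1M input tokens (hypothetical)
  outputPerMillion: 15.0, // $ per 1M output tokens (hypothetical)
};

// Cost of a single message, priced separately for input and output tokens.
function messageCost({ inputTokens, outputTokens }: Usage): number {
  return (
    (inputTokens / 1_000_000) * RATES.inputPerMillion +
    (outputTokens / 1_000_000) * RATES.outputPerMillion
  );
}

// Burn the cost down from a credit balance instead of charging a flat fee per message.
function burnDown(credits: number, usage: Usage): number {
  return Math.max(0, credits - messageCost(usage));
}

// Example: a mid-sized request only consumes what it actually used.
let credits = 20.0; // $20 of credits (hypothetical)
credits = burnDown(credits, { inputTokens: 12_000, outputTokens: 4_000 });
console.log(credits.toFixed(2)); // "19.90"
```

The point of this model is that a small message burns a few cents while a large one burns more, instead of every message costing the same flat amount.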

Since introducing the new pricing, we’ve added an option to toggle between different models. A v0-small model with meaningfully cheaper token prices is coming in the near future. We are shipping fixes for the bugs and issues we’re hearing about in the community. We’re also working on an “unlimited” plan that allows close to unlimited generations for a (high) price. We are committed to being transparent about what things cost and to giving you the information you need to reduce and optimize your costs.

If you are encountering infinite loops or cases where you feel v0 is behaving worse than before, please let us know in a separate thread with a link to the chat, and we’ll take a look.
