Our Response

We finally have a response to the pricing update.

Hey all, I am on the v0 team at Vercel.

First, thank you all for your feedback. We understand your frustration, but I want to let you know that we will not be reverting this change.

Our inference costs are based on input and output tokens. Normalizing to a single “token” or “message” shared across input and output is very difficult. To get around that, our competitors are just jacking up their pricing to $0.25+ per message regardless of request and response size (for comparison, the median message on v0 today costs $0.08 – more than 3x cheaper). Those inflated per-message prices seem unfair to us.

We decided the most transparent, future-proof, reliable solution is to take inspiration from how frontier model labs charge for on-demand inference… with a credit pool and incremental burndown based on consumption. That’s what we’ve done with the new pricing model.
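
To make that concrete, here's a rough sketch of how a credit pool with incremental burndown can work: each message is priced from its input and output token counts and deducted from the remaining balance. The token prices below are made up for illustration (they are not v0's actual rates):

```python
# Rough illustration of credit-pool burndown.
# The per-token prices here are hypothetical, not v0's real rates.

INPUT_PRICE_PER_1K = 0.003   # dollars per 1K input tokens (made-up)
OUTPUT_PRICE_PER_1K = 0.015  # dollars per 1K output tokens (made-up)

def message_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one message, priced separately for input and output tokens."""
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K \
         + (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

class CreditPool:
    def __init__(self, balance: float):
        self.balance = balance  # remaining credit, in dollars

    def burn(self, input_tokens: int, output_tokens: int) -> float:
        """Deduct one message's cost from the pool; fail if exhausted."""
        cost = message_cost(input_tokens, output_tokens)
        if cost > self.balance:
            raise RuntimeError("Credit pool exhausted")
        self.balance -= cost
        return cost

# Example: a 6K-token prompt that produces a 4K-token response
pool = CreditPool(balance=20.00)
charged = pool.burn(input_tokens=6_000, output_tokens=4_000)
print(f"charged ${charged:.3f}, ${pool.balance:.3f} remaining")
```

With these hypothetical rates, that example message burns about $0.08 of credit, roughly the median message cost mentioned above. The point is that a short request with a short response burns far less than a huge one, instead of both being billed at a flat per-message price.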

Since introducing the new pricing, we’ve added an option to toggle between different models. A v0-small model with meaningfully cheaper token prices is coming in the near future. We are shipping fixes for the bugs and issues we’re hearing about in the community. We’re also working on an “unlimited” plan that allows close to unlimited generations for a (high) price. We are committed to being transparent about what things cost and to giving you the information you need to reduce and optimize your costs.

If you are encountering infinite loops or cases where you feel v0 is behaving worse than before, please let us know in a separate thread with a link to the chat, and we’ll take a look.

I believe the majority of us understand why the change was needed and why it wouldn’t have been sustainable to keep things as they were.

However, the transition could have been handled with a bit more care for the community. We have taken the brunt of the failures resulting from teething issues (an understatement).

Given the widespread havoc of missing credits and the lack of a clear view of usage during chats (just a single page view, with no actual chat ID attached to the invoices – only a date and model), a lot is lacking in comparison to other services that use a similar pricing model. Simple usage meters per chat and for overall project usage would go a long way.

And then there is what has happened with v0’s failures, be it deployments, stopped code, or duplicated chats with empty bodies. This is also a widespread issue…

I think it would be fair to address the members affected by this transition with some form of compensation or trial period, to A) bring back some trust and potentially retain their memberships, and B) allow yourselves the vigorous testing you relied on under the old pricing model to really ramp up the fixes – it clearly needs it.

People can still vote and it’d be interesting to see if any minds change given your first response.