Response to Max Mleiter, Pricing For The People

v0 is completely different from Lovable and Bolt; they're barely even comparable. And don't get me started on Firebase Studio: it is just a mess. Yeah, I complain about v0's pricing too, but they've been upfront about their capabilities and limitations.

What we shouldn't be doing is constantly begging and whining for them to revert their pricing model; that's just not how the real world works. Business is about rolling with the punches and adapting to change. If you're running a company, you'd better learn that lesson fast, because the market doesn't care about your complaints. The best we can do is ask for better responses from v0 and help make this software better for the sake of our own companies and projects.

2 Likes

Watching this space for further updates on the suggested improvements to the new pricing model.

We need clear visibility into our usage DURING chats, not an unveiling of the aftermath. I'd feel more comfortable if I knew my usage then and there, per response, rather than going back and forth to check.

And :100:, you guys need to make sure we are not paying extortionate amounts for errors; that's something v0 needs to suck up.

2 Likes

I have no idea how you guys are using v0 in ways that create this insane token usage. You would likely see similar charges with the OpenAI API if you filled the context window to the brim.

I've been using v0 for months, almost daily, to support my work. It makes sense that Vercel doesn't want to subsidize our token usage, and it's reasonable to switch to a token-based model.

From my usage (with the md model, which we had before; it's unfair to use lg as a comparison now), my median is ~$0.04 per message.

In the end, the pricing change hasn't affected our company much at all, but we're using it as support in coding, not to build our entire business, which imo is beyond the reasonable capability of any model right now.

However, there are still a few things that need to be fixed:

  • v0 still makes a lot of errors. Offloading these costs onto users disincentivizes Vercel from solving this. My suggestion: when a user reports a bad response via the feedback button, give them the option to reset to the previous version and reimburse the tokens.
  • I have admittedly not encountered messages costing $1+, but I would be unpleasantly surprised to be billed that amount and would want an estimate or upper bound before the query runs. Prompt the user to confirm before they spend multiple dollars on a single request. If you get charged $7 four times in a row without noticing, your credits are gone. I would certainly want to be notified about this.
  • Having usage-based and plan-based (minimum) pricing at the same time. What is your rationale for gatekeeping v0 features behind Pro subscriptions? Since users are paying per token, i.e. fully usage-based, all features should be accessible.
    Also, if I have to buy additional tokens when I use it heavily, it seems unfair that remaining tokens are removed at the end of a billing period. This is not fully usage-based: you're making the heavy users pay for their usage (fair) while keeping the profit from everyone who underuses it (debatable).

And Jesus, keep it together, guys. The amount of ChatGPT-generated pseudo-intellectual debate answers and exaggerations posted here recently is not giving the impression that we're the kind of users any business would even want to have. If it's so bad and you're all threatening to leave, just do it. For my part, I still get good value out of v0.

3 Likes

All right, I agree with most of this. But how can you keep working with, and trusting, a platform that suddenly changes its prices by ONE HUNDRED times?

What are you doing to spend $2,000 a month on v0?

I'm not even maxing out my $30 Pro subscription, and I use it daily at work.

Either you're encountering massive billing bugs that I've never seen before, or you are doing something wild with your prompts that leads to outrageous token costs. If that's the case, you guys are the reason they had to change to usage-based pricing, because it takes 200 users using only half their credits to subsidize one who uses 100x the $20.
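A quick back-of-envelope check of that subsidy math. The $20 plan, the 100x power user, and the half-usage light users are the post's own assumptions, not official Vercel figures:

```python
# Subsidy sketch: how many light users does one power user consume?
flat_fee = 20.0                       # $/month per subscriber (assumed plan price)
power_user_cost = 100 * flat_fee      # one user consuming 100x the $20 plan
light_user_margin = flat_fee * 0.5    # a subscriber burning only half their credits

# Subscribers needed to cover the one power user's excess usage
subscribers_needed = power_user_cost / light_user_margin
print(subscribers_needed)  # → 200.0
```

So the "200 users at half usage per power user" claim is internally consistent under those assumptions.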

1 Like

This is exactly what we are asking for, but the people arguing about this pricing are the people using v0 to build entire platforms, then throwing a fit about having to pay for it.

I just want a clear view of my usage, with proper meters and chat responses directly linked to my invoices (not just the word "message").

That way I can see which prompts are generating fewer bug-free tokens. Currently it feels like Russian roulette.

3 Likes

And support is actively evading hard questions, and it seems to be even more actively hiding new posts with complaints and questions about pricing. This is a concerning development, and another indicator that things are going downhill hard from here.

Just the same standard reply without any attention to the issue or the customer's concerns. "Here is a prompting guide, because that will fix all the internal v0 bugs and plain errors." Weak effort.

My 2 cents:

1 Like

We’re happy to answer hard questions about errors, bugs, and features. Repeatedly raising the same concerns about pricing is not a hard question. The answer has been given already:

If you have a specific question, please ask it. Sincere discussions are welcome :slightly_smiling_face:

1 Like

Bernd, let’s deal with facts—and with public metrics—beyond the popcorn.

  1. “Stopping power-user subsidies improved our profit margin.”
    • Right after the token-based pricing switch, v0’s weekly traffic fell about 71 % (≈1.3 M → 400 k visits, per Similarweb screenshots circulating in the forum).
    • Fewer visits → fewer generations → less upsell into the rest of Vercel. Unit margin may rise, but the revenue base is shrinking hard.

  2. “Only 1 % complain; 99 % stay.”
    • The “How many people canceled?” thread is full of explicit cancellations; the top-voted comment starts with “I canceled immediately.”
    • Another thread (“New pricing became impractical”) documents a “mass cancellation,” plus similar reports on Reddit and Discord.
    • Real usage examples show US$ 30 turning into ~20 messages, 25 % of which failed—people are requesting refunds in support tickets.
    These are not 1 % outliers; they’re the vocal portion of a larger silent churn already visible in traffic.

  3. “99 % use it as before, only 10 % top up.”
    • If that were true, traffic wouldn’t have plunged and we wouldn’t see coordinated migrations to Cursor, Lovable, etc.
    • Cursor charges the same US$ 20/month but still offers unlimited completions, solving the iteration tax v0 just introduced.

  4. Cost competition is brutal.
    • At DeepSeek, 1 M output tokens cost ~US$ 1.10; on v0 the same volume costs US$ 7.50–37.50 (model-dependent)—up to 34× more.
    • Competitors are using that delta in their marketing to poach v0 users.

  5. A realistic board-room slide now includes:
    • ~70 % drop in organic traffic inside a week.
    • Support load spike from billing confusion and refund requests.
    • Customer-acquisition cost rising because former advocates turned detractors.
    • Projected 50 % active-user churn → US$ 12 M–14 M ARR at risk (even with ARPU still at US$ 20).

Bottom line
Token pricing didn’t just cut “subsidies”; it undermined v0’s flywheel—frictionless, rapid iteration. Traffic, cancellations, and migration threads show the market isn’t laughing. Until Vercel adds a free iteration buffer or an unlimited tier (even at a higher price), the exodus will continue. This is not a “1 % drama”; it’s retention in free fall…

2 Likes

@natannpr-gmailcom Much of what you posted is conjecture. But I appreciate you sharing your concerns! Please rest easy knowing that a necessary change would be made if metrics were in free fall.

I understand that you may be bothered by the switch to usage-based pricing. The great thing is you’re not under a lengthy contract with v0. Please feel free to switch if you prefer to use one of the other LLMs you mentioned. I want your projects to be successful, even if Vercel or v0 isn’t the right fit for your situation.

1 Like

Absolutely agree — this new pricing model is diabolical. The tiered small/medium/large system is a complete black box. What exactly am I paying for? Why is one prompt 18 cents and another nearly 4x that, with zero clarity on why? It’s a joke.

And on top of that, v0 still keeps generating flawed outputs, forcing us to redo the same steps multiple times — and we’re the ones paying for it. How is it remotely acceptable that we’re burning through credits because the system can’t get things right? That’s not a premium service, that’s a broken product with a paywall slapped on top.

This isn’t just unfair — it’s a complete failure of transparency and user respect. If credits are being wasted due to system errors, we should be automatically refunded. I’m in the same boat: had to buy extra credits just to fight through v0’s mistakes.

This entire rollout needs to be scrapped and rethought. Until then, I fully support credit refunds for every wasted, repeated generation caused by the platform itself. Enough is enough.

1 Like