I’ve been using v0 since March last year, and the Pro plan ($20) usually got me through the month while working on some projects. Over the last week, though, token consumption has skyrocketed: I’ve burned through almost the entire $20 limit in 2 days.
Problem
While trying to figure out why the tokens burn so fast, I looked at the usage and observed some inconsistencies where short prompts cost significantly more than complex ones.
Examples of Inconsistent Costs
Example 1: Short Prompt
“Can you make it into 4 columns on desktop”
Total Cost: $1.01
Input: $0.97
Output: $0.05
Example 2: Long Prompt
“design the Industries page to feel premium, B2B, and solution-driven… [detailed requirements]”
Total Cost: $0.36
Input: $0.30
Output: $0.06
Usage Trends
I pulled my usage from the start of January and earlier:
Previous Median Cost: ~$0.20 per prompt (with occasional peaks at $0.80)
Current Median Cost: ~$0.40 per prompt
Issue: Frequent $1.00+ charges for very small input prompts.
Can you please help and see if this is a bug, or if Claude has simply become more expensive?
I see you’re running into some weird usage costs on prompts. Sorry this has been difficult! Could you provide a bit more detail? Specifically, it would help to know what kind of prompts you’re using, the context in which they’re being executed, and any specific metrics or numbers you’ve noticed.
I can provide the usage Excel file. The prompts are pretty short, e.g. “Please add a product page with…” or “Redesign the section with a premium feel”. Even so, given that input tokens are priced much cheaper than output tokens, it doesn’t make sense to be charged this much on the input side.
Is it possible that, because you’re using the Pro model rather than Mini, v0 first gathers all the context from your code files and counts that toward input tokens? Since the latest Claude update, input and output pricing went 2x, so the final costs also double, which is what your numbers suggest.
If Anthropic bumped the price of Claude, that might be the reason.
But the $1 input cost still baffles me a bit. The project is pretty small, and in the first example it just reorganized four cards from two per row into a single row of four. I’ve hit 200 versions on some projects and was never charged a dollar per prompt.
Ah yeah, in another thread someone posted a picture of the input pricing overview before and after the update. Prices really did go 2x. So I think that’s the main problem; I’m not sure if Vercel decided this or if it’s something Anthropic did. Either way it stings a lot: $20 is gone in 10–15 prompts when using Pro or Max. I’ve only been using Mini since the latest update. It kind of does the job, but you lose serious power.
We recently changed v0’s pricing to exactly match Anthropic’s token prices. That means longer requests and larger projects are less costly, because those tasks rely on “cached tokens”, which are much cheaper. The trade-off is that some simple tasks and generations will now cost more than they did previously. That’s why you’re seeing some complex prompts cost less than short ones.
We made this change based on feedback from v0 power users that the existing pricing model didn’t work well for them.
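Back-of-the-envelope, the cached-token effect can be sketched like this. All rates and token counts below are hypothetical placeholders for illustration, not v0’s or Anthropic’s actual numbers; the point is only that a short prompt whose project context is re-sent uncached can cost more than a long prompt whose context is mostly served from cache.

```python
# Hypothetical per-million-token rates (assumed, not real Anthropic pricing).
RATE_INPUT = 3.00    # $ per 1M uncached input tokens
RATE_CACHED = 0.30   # $ per 1M cached input tokens (assumed ~10x cheaper)
RATE_OUTPUT = 15.00  # $ per 1M output tokens

def prompt_cost(input_tokens: int, cached_tokens: int, output_tokens: int) -> float:
    """Estimate one prompt's cost, splitting input into cached vs uncached tokens."""
    uncached = input_tokens - cached_tokens
    return (uncached * RATE_INPUT
            + cached_tokens * RATE_CACHED
            + output_tokens * RATE_OUTPUT) / 1_000_000

# Short prompt, but the whole (hypothetical) 320k-token project is sent uncached:
short = prompt_cost(input_tokens=320_000, cached_tokens=0, output_tokens=3_000)
# Long prompt over the same project, but most of the context hits the cache:
detailed = prompt_cost(input_tokens=320_000, cached_tokens=300_000, output_tokens=4_000)

print(f"short prompt:    ${short:.2f}")
print(f"detailed prompt: ${detailed:.2f}")
```

Under these made-up numbers the short prompt comes out several times more expensive than the detailed one, purely because of the cached/uncached input split, which mirrors the inconsistency reported above.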