Hi, Jabber! Welcome to the Vercel Community.
Yes, tokens are consumed for every AI request regardless of response quality. When you send a prompt to v0, tokens are counted for both your input (the prompt) and the AI’s output (the response), even if that response doesn’t meet your expectations.
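To make the accounting concrete, here is a minimal illustrative sketch of how per-request billing adds up; the function name and token counts are hypothetical, not part of v0's actual API:

```python
# Hypothetical illustration of token-based billing: the prompt and the
# response both count toward usage, whether or not the response was useful.

def billed_tokens(input_tokens: int, output_tokens: int) -> int:
    """Total tokens charged for one request: input plus output."""
    return input_tokens + output_tokens

# Example: a 120-token prompt that produced a 480-token response
total = billed_tokens(120, 480)
print(total)  # 600
```

So a discarded or unhelpful response still contributes its output tokens to the total, which is why regenerating a prompt consumes tokens each time.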
I understand your frustration with the new token-based pricing. The v0 token system is designed to provide more granular usage tracking, but I recognize that this change affects how you use the platform.
We appreciate the feedback! Feel free to drop it in the main thread as well, if you’d like: