So, firstly, the new pricing system sucks. Tokens are not it. The platform was perfect beforehand; please revert this change or I'll be cancelling my subscription at the end of the month! (There are far too many competitors for you to pull this stunt and expect users to just accept it and continue giving you money for an inferior product.)
Secondly, I'd just like to clarify: are my tokens still being depleted when prompting, even if the AI messes up or gives a useless response? This is another thing that I'm not at all happy about with this model. I've experienced far too many instances of the AI getting a little bit confused or doing something totally ridiculous compared to the prompt. Now you're potentially going to lose tokens based on these responses? Can anyone clarify if this is the case?
Yes, tokens are consumed for every AI request regardless of the quality of the response. When you send a prompt to v0, tokens are counted for both your input (the prompt) and the AI’s output (the response), even if that response doesn’t meet your expectations.
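To illustrate how that billing model adds up, here's a minimal sketch. The four-characters-per-token rate and the helper functions are assumptions for illustration only, not v0's actual tokenizer or pricing:

```python
# Rough sketch of token-based billing where both prompt and response count.
# Assumption: ~4 characters per token is a crude estimate; v0's real
# tokenizer and rates will differ.

def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)

def billed_tokens(prompt: str, response: str) -> int:
    """Both the input (prompt) and output (response) are counted,
    even if the response doesn't meet expectations."""
    return estimate_tokens(prompt) + estimate_tokens(response)
```

The key point the sketch makes concrete: there is no quality check in the billing path, so a broken response of the same length as a working one deducts the same number of tokens, and a follow-up prompt to fix it is billed again on top.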
I understand your frustration with the new token-based pricing. The v0 token system is designed to provide more granular usage tracking, but I recognize this change impacts how you use the platform.
We appreciate the feedback! Feel free to drop it in the main thread as well, if you’d like:
Well, in that case, I'm out after this month's subscription is up.
I wouldn't have as much of an issue with your new pricing model if this weren't the case, but you CANNOT ship something in a state where responses are sporadically broken, and/or users have to spend multiple prompts fixing errors that earlier prompts created, with a billing model like this in place.
If every single prompt worked correctly, and there were no instances where you needed further prompts to fix new problems or regenerate responses, then yes, a tokenised model would be acceptable.
Imagine trying to use a platform where every single interaction is charged, yet potentially 50% of interactions are bugged or require another interaction to fix an issue. Would you be happy paying for a model like this? I don't think so.
This is a massively short-sighted shift in monetization for an 'unfinished' product.
Pauline, I would demand and plead with the real Vercel staff to make a response, as you're just a messenger. No offense, but your words mean nothing, especially when you're using out-of-the-box AI responses without even reading them.
Please pressure the people who matter. We deserve immediate action.
I acknowledge your reference to the Code of Conduct. My previous post was a direct expression of the profound frustration many in the community feel regarding the recent pricing changes and the consequent service disruptions. While I singled out Pauline, it was in the context of her being the point of contact for what many perceive as inadequate ‘out-of-the-box’ responses to a crisis.
I will continue to advocate for the community, and for the sake of what v0 was versus what it is now. If adding mentions strays towards misconduct, then I'll champion the cause within those limitations.
Please do not distract from the real issue; framing my responses as misconduct is just an attempt to overshadow the problem.
Sorry if you feel offended by my vigilance for the community.