I’m done giving this tool the benefit of the doubt. It honestly feels like V0.dev is deliberately designed to burn through credits while delivering nothing of real value.
In the past, spending $20–30 per month gave me decent results. Yesterday I spent $40 in a single day and got absolute garbage. Most of my instructions were ignored — and even worse, the ones specifically meant to optimize costs and reduce pointless “work” were blatantly disregarded.
Today, after 10 iterations, the model still failed to simply change button text color to white. Seriously? $10–12 just to fail at changing a font color? That’s beyond frustrating — it’s a scam.
And the cherry on top: I even tried to request a refund through their official form. The moment you select the invoice, the Submit button is conveniently greyed out. Pure coincidence, right?
I’m preparing to file a chargeback for the charges, because in my view the service hasn’t been delivered properly. And honestly, I think everyone should start doing the same and looking for alternatives. Only a real hit to revenue will ever push them toward customer-friendly changes.
If you notice the model going off the rails, you can hit the Stop button at any time. Our design mode tools are a faster, deterministic way to change text colors than prompting the LLM.
We are working hard to improve the quality and power of v0, but language models are non-deterministic tools with variable results. If something in context is preventing it from working the first time, repeating the same process 9 more times is unlikely to provide a huge improvement, so I recommend forking into a new chat to minimize contextual distractions.
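To give a sense of scale: v0 output is typically React with Tailwind, so the text-color change itself is a single class edit, which design mode can apply directly without spending a prompt. The component below is purely illustrative, not your actual code:

```tsx
// Purely illustrative v0-style component: making the button text white
// comes down to the single `text-white` utility class on the button.
export function SubmitButton() {
  return (
    <button
      type="submit"
      className="rounded-md bg-blue-600 px-4 py-2 text-white hover:bg-blue-700"
    >
      Submit
    </button>
  );
}
```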
I totally agree, this new update is terrible. I worked on a project a month ago and it was perfectly fine; now that there’s agent mode or whatever it’s called, I can’t even get the preview to work on something as basic as environment variables. The sad part is that while you’re prompting to solve the problem, you’re burning through credits. It’s as if Vercel put so much work into upgrading something that clearly worked beautifully and went ahead and pushed the worst update thus far, which is sad to say given that I’ve had such a good experience with v0 compared to other tools out there.
Same exact experience here. This was a really good tool that I recommended to my entire team after using it. It now requires much more work, more prompts, and more fixing to get a semi-decent result; things that were resolved in a single prompt now take way more prompts to fix (and much more money spent). I also noticed that the model keeps “thinking” yet no code is being written; nothing changes and it just keeps going and going.
100% agree, multiple days trying, credits burned, and the results? Garbage code, broken websites, and wasted time. Since the refund request is disabled in the request form, I’ll let AMEX handle the dispute.
Is this serious? I paid $20 two days ago, and now I got an email asking me to buy more credits because mine are at 0! How is it possible that my credits ran out in 2 days?
It’s so sad to see a tool go down so fast. Same experience here.
Seeing more of their recent changes, I can see that agent mode is now an embedded part of everything they are building, so we are basically stuck with this as they either try to power forward hoping to get it working, or get brave enough to hit a hard reset and lose a ton of work.
It’s so frustrating, and communication from the team basically amounts to pretending nothing is wrong. Hello, we are actually using this, you know.
Yeah, burning through $20 in 2 days is brutal. I’ve seen this happen when people iterate on complex features - the tokens add up crazy fast.
One approach I’ve used is to build the core app in v0, then drop in pre-built components for the heavy stuff. Saves a ton on credits since you’re not re-prompting the AI about messaging protocols every iteration.
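Rough sketch of what I mean, assuming a Next.js app like v0 generates (the file and module names here are made up): keep the heavy logic in a module you only write once, and let v0 touch nothing but the UI wrapper.

```tsx
"use client";
// app/chat/page.tsx: illustrative layout only; the imported modules are placeholders.
import { useMemo } from "react";
import { ChatPanel } from "@/components/chat-panel";   // thin UI shell generated by v0
import { createChatClient } from "@/lib/chat-client";  // pre-built, checked in once by hand

export default function ChatPage() {
  // All the protocol details (endpoint, reconnects, message framing) live inside
  // createChatClient, so UI-only prompts in v0 never have to re-explain them.
  const client = useMemo(
    () => createChatClient({ url: process.env.NEXT_PUBLIC_CHAT_WS_URL ?? "" }),
    []
  );
  return <ChatPanel client={client} />;
}
```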
What kind of features were eating up your credits? Sometimes there are ways to optimize the prompting.
@richardeschloss v0 is an AI coding tool. As such, it is inherently prone to making some mistakes. The team aims to minimize that risk as much as possible and iterate toward the most reliable code creation.
The answer to the question, “Is v0 a scam?” is actually, “No, you can get a refund at vercel.com/help if the product didn’t work like you expected.” The marked answer was posted on August 29th, but it was only marked as the answer today. I know that because I’m the person who marked it as the answer to highlight how anyone who feels cheated by v0 can get a refund.
Meanwhile, all feedback is still being collected. The v0 team is always working to improve code accuracy and token efficiency, while also fixing bugs and adding new features. If you’d like to share more about your experience, please feel free to add to an existing Feedback thread or start your own.
A post in this thread was marked as the answer because the question asked by the original poster was answered. Multiple other threads are currently open for feedback and discussion. Vercel employees frequently respond and participate in these threads.
If you have issues with the support form, please share a screenshot so we can see where the problem is. Keep in mind that some situations do require human intervention, and opening a support ticket will put you in touch with the billing specialists who can help.
@richardeschloss I understand that you’re upset. Please understand that change is incremental, not instant. As has already been said, the v0 team collects user feedback and makes changes based on the experiences and suggestions shared.
If you want a refund, you need to use the support form. The form may direct you to open a billing case if your situation prevents the AI assistant from processing your requested refund automatically. Access to manually grant refunds is limited to the support team, which processes these requests as quickly as possible.
Your suggestion for automatic ticket creation is interesting, and I’ll highlight it with the team. Note that I cannot guarantee such a feature will be added, as there may be logistical issues that make it more complex to implement than either of us currently imagines.
It seems you might be disagreeing with what “error” means. The “Fix with v0” button handles detectable errors in the code and configuration. The screenshot of a runtime error was just an example of where the button can appear in the UI. Were you hoping for it to do something other than solve detectable code errors? If so, please elaborate.
It’s perfectly understandable for you to be frustrated when things don’t work as expected. Please keep in mind that while you are allowed to voice your frustration and share feedback, you are not allowed to violate the code of conduct.
@amyegan the expectation I had for “Fix with v0” was that it would handle detectable errors in the code and configuration, which it failed to do. There are ways it could do a better job of this. When given a list of requirements, v0 could write acceptance tests and run them against the code it just created; it’s not currently doing this by default. When asked to create a test.html file, it created it, but the tests hung. Tools like Puppeteer or Playwright could be useful here. That way, the user can see where the assistant’s misunderstanding of a particular requirement came from.
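To make that concrete, here’s roughly the kind of acceptance test I have in mind: a minimal Playwright sketch where the file name, route, and requirement are placeholders (and a configured baseURL is assumed). If v0 generated and ran something like this against its own output, a failing assertion would show exactly which requirement it misread.

```ts
// tests/submit-button.spec.ts: hypothetical acceptance test for the requirement
// "the submit button text should be white".
import { test, expect } from "@playwright/test";

test("submit button renders with white text", async ({ page }) => {
  await page.goto("/");                                         // page generated by v0
  const button = page.getByRole("button", { name: "Submit" });
  await expect(button).toBeVisible();
  // A failure here points at the exact requirement the model misunderstood.
  await expect(button).toHaveCSS("color", "rgb(255, 255, 255)");
});
```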
As far as automatic ticket creation goes, if that’s technically challenging, it may be advisable to offer an assisted ticket creation feature instead: something where, after [5?] iterations or once the message limit gets down to the last 3, the user is asked whether they need support. This should probably be prioritized.
Finally, it might be a good idea if you or your company proactively reached out to each user on this thread to apologize for the negative experience encountered (even if you think you’re right) and try to help them. If you’ve already done so, great.
Here’s an example. I just had v0 create a test file. It ran the tests and found 2 failures, but never showed the “Fix with v0” button. Also, when I asked it to create the test file, it created both a React version (app/test/page.tsx) and a native version (app/test.html). Going to the “/test” route caused an issue because test.html was trying to load the .js version. I had to rename test.html to test2.html to avoid the naming conflict and get the tests to run. v0 never determined this to be the cause of the hanging tests.