V0: From Best to Trash in Just 2 Weeks. What happened?

I’m writing to express my deep frustration with the consistently nonsensical garbage outputs I’m receiving from V0-1.5-md and V0-1.5-lg. Could you please let me know whether a fix is forthcoming, or whether I should request a refund for my remaining balance?

Best Regards

You need to share your app along with your prompts so people can see what is actually being processed and generated.


Hard to tell without looking at your project, but the main reason output quality goes down is that too much irrelevant context is being sent along to the LLM (see the sketch after this list):

  • Large chat histories reduce the amount of the project that can be sent along
  • Large files make it harder to pinpoint what file contents need to be sent along
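
To make that failure mode concrete, here is a minimal TypeScript sketch of how a context assembler under a fixed token budget might behave. This is purely illustrative: the token budget, the 4-characters-per-token estimate, and the priority order are all assumptions, not v0's actual internals.

```ts
// Hypothetical context assembler: newest chat turns are packed in first,
// project files last, until a fixed token budget is exhausted.
type ContextItem = { label: string; text: string };

const TOKEN_BUDGET = 8_000; // assumed budget; not a real v0 number

// Rough heuristic: ~4 characters per token for English text and code.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function buildContext(
  chatTurns: ContextItem[],
  projectFiles: ContextItem[],
): ContextItem[] {
  const included: ContextItem[] = [];
  let used = 0;
  // Prioritize recent chat, then files; items that don't fit are
  // dropped silently, so the model simply never sees them.
  const candidates = [...chatTurns].reverse().concat(projectFiles);
  for (const item of candidates) {
    const cost = estimateTokens(item.text);
    if (used + cost > TOKEN_BUDGET) continue;
    included.push(item);
    used += cost;
  }
  return included;
}
```

Under a scheme like this, a long chat history can consume the whole budget before any file contents are packed in, so the model ends up editing code it can no longer see. That lines up with the symptoms reported in this thread.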

I can’t say how it was before, but I’ve been using it for the last 4-5 days and wouldn’t recommend it to anyone. The first 10-15 prompts will be fine, but as soon as you start adding features or changes, it will start removing previously implemented code and functionality.

As an example, there were two pages: the active tool I was improving and a tutorial page. After updating the tool page, it decided that the tutorial page was not needed and removed all references and links to it without mentioning anything in chat. Since the links were at the top/bottom of the page, I only noticed three versions later, which meant a lot of back and forth to get what I wanted.

I’ve been using v0.dev for the past couple of months, and at first it was an absolute game changer. The model’s responses were sharp, and I was genuinely impressed, so much so that I considered writing a LinkedIn post praising how effective and productive the tool was. It used to solve issues in just a couple of iterations, and I managed to build some solid tools with it.

However, over the past week, things have taken a noticeable turn for the worse. Even the larger models now struggle with basic tasks; in one case, it failed to fix a simple broken link after 10 attempts. Sometimes it fixes one small thing but breaks the rest of the working code in the process.

There are clearly some serious regressions in the models. Once these issues are addressed, I’d be more than happy to pay for the service and recommend it to my network, but for now it’s hard to rely on it with confidence.


The same thing is happening to me. To get straight to the point, it seems like the latest updates made the models a bit less intelligent… or, if you’ll pardon me, a bit stupid.

My team and I initially didn’t mind the new pricing, but as we go on, it’s starting to feel a lot more expensive because of the model’s mistakes and failures.

Just saying… We still plan to stick with v0 for a while, but we wanted to express our current dissatisfaction, just in case it helps provide some clarity to v0’s team.

Let’s keep it rolling…