What's going on with v0 Agent?

I’ve been using V0 for quite a while with great results and got my team to use it as well, but since the introduction of Agent, we’ve noticed a sharp increase in hallucinations, as well as cases where requests are repeatedly misinterpreted or ignored. Given a very basic request or a bug fix, the agent model decides to refactor the entire project (badly) and redesign it from scratch. This forces us to keep reverting to previous versions, time and time again.

I usually leave feedback directly in the chat, but I’ve found myself (and my team members) spending significantly more time correcting these kinds of errors and misinterpretations, all while burning through tokens because the model is inefficient and ineffective. Do other members have the same experience here?

Model releases can be difficult, and some optimization time is to be expected, but the level of hallucinations and bad output makes it quite unreliable for quick iterations and ongoing work. For example, I asked for a color palette update: the component library was switched, the app was refactored, and the design was changed. None of this was necessary or anything I wanted. Furthermore, when I reverted and asked a simple question to verify which library was used (to give it context once again), the model went into a loop and again decided to refactor and change the component library.

In the previous version, these kinds of issues were far less frequent and the output was much better overall.

Please advise,

Thanks!

2 Likes

Adding to this: changes also take significantly more time, which was not the case in the past. Resolving requests now takes minutes and eats up the balance on the account. Simple tasks are, time after time, misinterpreted as massive undertakings, which drags on and on.

It seems like Agent got even worse in the last couple of days. In just the past couple of hours I’ve spent around $6 trying to fix bugs and random changes that Agent introduced, only to get so frustrated that I roll back to an earlier version.

I asked Agent to add a new “To-dos” table that also appears on a “Quick Actions” page, a task the previous model would have handled with ease. Instead, Agent decided to “fix” the app, delete part of the “Quick Actions” table, delete an entirely different page, and redesign the app with a different color palette. The functionality I asked for doesn’t work, and the functionality that was working got deleted; this is essentially throwing money in the bin. When I asked it to fix the bugs, I got a partial implementation of the fixes on certain pages, while other pages remained broken.

Additionally, it keeps getting stuck and enters endless loops for minutes before outputting something that is simply useless.

1 Like

It doesn’t understand what I need, and it spent my balance )))

Currently experiencing the same thing. I cannot get simple tasks done and it’s burning through my credits.

Currently experiencing the same thing. Also had a bug that persisted even after reverting.

Seems like there is a lot of this in the community, but very little feedback from the V0/Vercel team. At this point I’m looking for an alternative to V0.

I’ve given feedback to the team several times. It’s really quite extraordinary that they are currently (as of Sept 9th) doing nothing to address very serious user concerns, offering only nebulous responses about passing on feedback. One can only hope they do so in time. And then we got this new Design System, which is a complete distraction from the real usability issues.

I constantly leave feedback, but there is no reply from the V0 team. I’m not sure why they made the decision to remove access to the old models, but there is endless feedback here about how bad it is. I’m also not sure whether we will get a reply on whether any changes are coming.

@amyegan Hi, I know you say the team reads all feedback.

May I kindly ask what is going to be done? I think people just feel in limbo with the system. We are on paid plans and have no confidence that the system is working (as it was pre V0 App) or that we are being listened to, as there is currently no feedback.

I’ve raised feedback multiple times about serious issues people are facing, which are also being highlighted in negative Trustpilot reviews, damaging V0’s reputation.

If I were on the V0 team or support staff, I would be working day and night, pulling out all the stops to reply to every single piece of feedback and set up a comprehensive plan / video blog session to acknowledge all users’ concerns and lay out the exact plan going forward. This product was brilliant in mid-August.

Every day that goes by without some answers means paying customers may leave a platform they once loved.

Regards

As I’m sure you can imagine, we receive a lot of feedback about v0. Not every suggestion will directly result in the exact change requested. Instead, the team makes incremental improvements based on a more holistic view informed by all the feedback coming in. Much of it doesn’t receive any fanfare as individual changes are often quite small. Bigger changes are posted on v0.app/changelog

So the “exact plan going forward” with regard to agent/model improvements is incremental change. I can try to get you a more exact answer if you have a question about a specific feature.

I hope that makes sense!

1 Like

Hi Amy,

It’s obviously great that incremental improvements are being applied. I will do some testing to look for improvements.

But can I ask if you and the team really understand what is being highlighted about these problems on the forums?

E.g. major V0 build degradation, poor output quality, no choice of model, slower performance, and excessive credit usage.

Every one of these items needs to be addressed & explained.

I’ve given much feedback in an articulate and respectful way, along with many others who are much less diplomatic, and we’ve been met with absolute silence on these serious issues.

This on its own is really disappointing, but the fact that V0 pre-Agent was amazing makes it 10x worse: I’ve tried 6 platforms and it was the clear leader, and now I have to waste time testing alternative options.

If you and the team are unable or unwilling to issue a comprehensive response, we can only assume this is a deliberate choice while users suffer or leave the platform.

Regards

1 Like

I have been a paying user of V0 for several months, and it has become progressively worse to the point of now being borderline unusable. The following issues have become pervasive:

  1. Pointing to a design element now typically results in v0 unnecessarily reading unrelated parts of the codebase. Previously, v0 would correctly interpret that the change should be done either directly to the selected component or within the page - not anymore.
  2. Project knowledge and the specific instructions provided there are regularly ignored.
  3. V0 regularly decides to update parts of the app that it hasn’t been asked to update, consuming copious amounts of tokens in the process.
  4. Token consumption is extreme even for simple changes; I’d guess it could be related to the worse context handling described in #1.

The drop in output quality is really tangible, to the point of being counterproductive. I agree with @alexcumbers-9492; I see no reason to persevere with the platform if this continues.

Regards

You’ve listed some general concerns that have already been answered in documentation and other posts.

  • The shift to Agentic v0 means no more need to choose models.
  • High credit usage is often the result of broad instructions, big files, or large projects. Forcing the agent to scan every file for every command isn’t the most efficient use of resources. Instead, breaking a project into components and targeting specific changes helps manage context and reduce credit usage. And be very specific with prompts.
  • Code output will vary based on project settings and instructions provided. Setting project-level instructions and strategic forking can get you more predictable results. Giving targeted instructions also helps (see the example sketch below).
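
As a rough illustration (the exact wording below is hypothetical, not official v0 guidance), project-level instructions that guard against the problems described in this thread might look like:

```
Component library: keep the existing one (e.g. shadcn/ui); never switch or replace it.
Design: preserve the current color palette and layout unless a change is explicitly requested.
Scope: only touch the files or components named in the prompt; do not refactor other parts of the app.
```

Pairing instructions like these with small, targeted prompts (e.g. “add a To-dos table to the Quick Actions page, change nothing else”) limits how much context the agent has to read, which is also what keeps credit usage down.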

More tips about how others use v0 have been shared in the community handbook. I strongly recommend reviewing the techniques that worked for other people and the v0 docs.

I’m not sure what would constitute a “comprehensive response” in your mind. If there’s a specific feature you want to know about, please let me know so I can check with the team about if/when it might be available.

2 Likes

Thanks @amyegan. Unfortunately, the response doesn’t answer our questions or concerns.

Like many others in this thread, I’ve been a paying member of V0 and introduced it to my team of PMs and engineers. We’ve incorporated it into our workflow and seen great improvements in V0 over time as you delivered new features, increased model quality, and more. The output we were getting from V0 was a game changer and made working with your product enjoyable and valuable. The point of frustration, as mentioned time and time again across the forum, is that this changed once Agent was introduced. What happened? Quality went down, reliability went down, and the effort to produce a design went significantly up. The product became less valuable but the price increased.

  • “The shift to Agentic v0 means no more need to choose models.” Why remove user choice? Other products (for example, Cursor) let you pick either an Agent or a specific model. A dropdown with three options is not a UX burden, especially when it produced better results. If the goal was to simplify, why force a lower-quality default on users who relied on the previous behavior? Introduce Agent, make it the default choice, and let users who want one of the previous three models pick it.

  • “High credit usage is often the result of broad instructions, big files, or large projects. Forcing the agent to scan every file for every command isn’t the most efficient use of resources. Instead, breaking a project into components and targeting specific changes helps manage context and reduce credit usage. And be very specific with prompts.” This might be the case, but the problem we’re trying to highlight is that it wasn’t like this before. We didn’t have this issue. I can run an old prompt against Agent and compare the results: the previous output (from a specific model) was way better, and it’s not even close.
    I now need to spend significantly more time to get mediocre results, while just a few months ago I got amazing results with a fraction of the effort.

  • “Code output will vary based on project settings and instructions provided. Setting project-level instructions and strategic forking can get you more predictable results. Giving targeted instructions also helps.” My previous point above stands here as well. If this had been the default state of V0, that would’ve been a different conversation; the problem is that it wasn’t like that - it got worse.

Our main point of frustration is that we had an amazing product that we used daily and that generated value repeatedly. Product quality has dropped to the point where it’s unusable, because we are forced to use an Agent that is simply not as good as the previous models. You had a great feature, it was amazing, you took it away, and the users are frustrated. This is not abnormal; it is a normal response to a decision you’ve made that doesn’t make sense to the users. They get less value, spend more time, and pay way more money, because of a single feature.

Thanks for your response. I’d like to point out again that this is all coming from a place where users had an outstanding experience with your product. The results your product produced were ahead of all competitors; you were on a completely different level. This is no longer the case because of Agent, hence we’re frustrated. You gave us something amazing, and then took it away.

2 Likes

Hi @anton-4938, I went through your reply and was hoping to understand your problem better. I get that you prefer the older models over the current Agent, and you’ve shared valid pointers. But if you could share some references, like chat links where you prompted something very basic and didn’t get the results you expected, or some comparison data between the current and older models in terms of credit usage, that could help them help you, to be fair.

Although I do believe that users should be given the choice of model, there has to be a really good reason for them not to be offering it. Maybe there’s a bug they want to overcome before making it public, or something else they simply cannot share.

All that being said, the Agent still performs decently if you are a programmer and know what’s stored where. So if anyone’s getting frustrated with the Agent, I feel it’s probably because they enjoyed the fact that with the older models they didn’t have to do much research and could get what they wanted. But trust me, that would only lead to a grand disaster if you are not familiar with the changes the v0 AI is making to your codebase.

1 Like

You are 100% right. I feel I’ve been bashing my head against a brick wall, raising feedback to support, only to get non-answers or, worse, be told that we users are doing it wrong, when we all got amazing results with V0 dev pre mid-August. It’s like everything is worse and there is complete denial in the support responses, as if there are no issues. I’m going to try one last time with a build and collate various user feedback into one final thread, to see if there is a proper response, which at this point only the manager of V0 and/or Vercel’s head of dev can address.

1 Like

As @riza suggested, it would be helpful if you shared chat examples. No need to make them public; I just need the IDs so I can flag them for closer review.

1 Like

Sure, I’ll collect a few chats from the team and share them. They range from simple pages to more complex projects that went completely sideways all of a sudden.

1 Like

@amyegan can the chats remain Unlisted when sharing? I don’t see an ID parameter anywhere; are you referring to the last dash in the URL slug?