Agent mode feedback thread

Ah okay. So they’re purely here to get some feedback on the new Agentic AI. I didn’t use it much today, but what I noticed while using it is:

  • it attempts to solve the issue in one message rather than me having to repeat myself.

  • it may miscode other parts of the project which causes other error messages.

  • it chews through a lot of tokens, which is okay, but only if the problem actually gets fixed without miscoding the rest of the project.

  • it’s more intelligent and remembers context better.

I’ll also add that finishing a message with “Remember to keep everything else in the project the same” works a lot of the time to stop the Agentic AI from miscoding the other parts of the project.

2 Likes

Will the option to disable agent mode and choose the model I want, including GPT-5 (as it was before the update), be reinstated? If so, when? I cannot keep sitting in this in-between state, unable to work on my project, waiting for someone from Vercel to respond to all the negative feedback on agent mode, and hoping you will actually listen and give us the option to choose again.

I do not want, and cannot wait for, a “better” version of agent mode. I simply want the ability to turn it off and use GPT-5 again - which was genuinely phenomenal. Right now, it seems like you’ve launched a product too early, with too many errors to be functional, and you are forcing paying customers to act as test subjects. If you want testing, make it opt-in - but give us the option to turn it off so we can actually get work done.

I do not appreciate being tested on as a paying user with a broken product I cannot rely on for my work. If this continues, I will have to cancel my subscription and move to another platform.

7 Likes

Ditto a lot of the comments here - agent mode is not good. I’ve immediately found myself going directly to GPT-5 with a specific problem and copying and pasting in my code. Please revert this or make agent mode much better. It’s almost made Vercel unusable.

3 Likes

Please, guys, making this change so abruptly was simply NOT it.

bring back model selection, or at least let me pay more money to get it back???

3 Likes

Like others here, I’m finding agent mode challenging to work with on existing projects.

Specific issues I’ve encountered:

  • File corruption during targeted fixes - Asked to fix a rate limit boolean check in one function, agent deleted the entire 200+ line API route file, leaving just one import statement. Had to manually restore from an earlier version.

  • Waypoint restore not functioning - When trying to roll back after breaking changes, the waypoint feature didn’t actually restore the previous state.

  • Scope creep on changes - Simple requests affect unrelated files (asked for a UI update, it modified middleware error handling)

I appreciate that agent mode is trying to be more comprehensive, but for those of us with functioning projects that just need targeted fixes, the old model selection was more reliable. Even just an opt-out toggle while you refine agent mode would help us keep working…

Not trying to pile on, v0 has been great for my project overall. Just hoping this feedback helps improve the experience for everyone.

2 Likes

I’ve noticed a drop in the Agentic AI’s intelligence over the past 24 hours or so. By intelligence, I mean how good it is at solving the specific problems I ask it to solve.

I finally cancelled my subscription.
I cannot use it as before, so I don’t need it any more.

PLEASE CAN WE HAVE A WAY TO SWITCH MODELS, THIS AGENTIC MODE IS REALLY STRESSING ME OUT.

The agentic AI is not giving good results; it’s frustrating. Please give us the chance to choose models, it’s really delaying me.

1 Like

Hey!

I’ve been using it for a week and the results were awesome, but since the new version it’s been a mess… I’ve tried 20 prompts in a row and none were satisfying. What happened?

Worst update ever. Previous models delivered design at the level of a seasoned UI professional; the current model delivers at the level of a junior (or even an intern) :angry:

1 Like

@juliengobbi-5344 They introduced something called “Agentic Mode,” but it has completely broken the system. I can no longer make any file edits or add new features; it fails to understand any of my commands, cannot read the provided .php file or reads it incompletely, and when delivering, although the code is 2,000 lines, it repeatedly gets stuck around line 600. The v0.app platform is in a very poor state.

When I explain my request in detail, it not only produces results that are entirely unrelated to what I asked, but also severely damages my existing PHP code to the point that I could not recover my file. For example, if I request to add a modal—describing precisely how I want it—it ends up breaking the site header, inserting numerous unwanted elements into the center of the site, and making arbitrary, illogical, and highly disruptive revisions to the overall layout without any instruction from me. It even adds an excessive amount of unnecessary elements along the right side of the site and all the way down to the bottom. In short, it completely disrupts the site’s functionality and structure in a way that is far from what I requested.

Since no representative from the V0 team has provided any information on why this is happening or when it will be fixed, we are left surprised and waiting—hoping they will eventually explain the situation and inform us of when it will be resolved.

1 Like

Move Over, The v0 AI Agent is Taking The Wheel!: Death of the v0-1.5-sm model, squashing the free tier and the cost of autonomy

I want to echo the frustration I’ve seen regarding the forced migration to the AI Agent mode. I know this may seem like exposition, but I wanted to share my experience in case it helps others, as the situation wasn’t clear to me until after the fact.

But first: I’ve loved v0 since its early stages. The pricing models, as they evolved, seemed to track well with the output produced by the AI copilot experience. Even with the sm, md, lg model breakdown, the pricing seemed to align. I could invoke sm model prompts and tweak designs to a satisfying end result, and it felt like I was truly working collaboratively with the AI. I was able to choose the amount of assistance needed, and the cost seemed commensurate.

Jump to the post-AI-Agent experience and it feels like the dynamic has flipped. In the example below, I asked for the div background color of these badges to be modified. A fairly simple request that in the past would cost ~.05 credits instead cost the AI Agent .39 credits, an increase of 680%, and nearly all of that cost (.38 credits) was input credits. That means a credit pool that used to allow for extensive collaboration can now disappear in just a few prompts, and the $5.00 free credit plan along with it. So what’s going on?
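To make the math above concrete, here is a quick sanity check of those figures (the credit amounts are this poster’s approximate numbers, not official pricing):

```python
# Sanity-check the reported cost increase (poster's approximate figures).
old_cost = 0.05     # credits for a simple edit before agent mode
new_cost = 0.39     # credits for the same kind of edit with the AI Agent
input_cost = 0.38   # portion of new_cost attributed to input credits

pct_increase = (new_cost - old_cost) / old_cost * 100
print(f"increase: {pct_increase:.0f}%")           # prints "increase: 680%"
print(f"input share: {input_cost / new_cost:.0%}")  # prints "input share: 97%"
```

At ~.39 credits per simple prompt, the $5.00 free credit allowance would cover only about a dozen such requests, which matches the complaint about the free tier disappearing.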

Gone are the cost-effective sm model and the ability to manually select different models (i.e. manage your prompt cost). With each prompt, the AI Agent can choose to plan, research, build, and debug. Each of these sub-actions, as well as any follow-up prompts the Agent invokes on its own, draws from your [input] credits and runs unchecked by the user. From an accuracy and efficiency perspective, I’ve encountered more looping conversation, particularly around debugging, where the AI ultimately couldn’t fix the problem and ended up invoking extra steps to undo what it had just tried, or would undo code changes that I had manually committed (at my cost). I also feel the collaborative experience has suffered, because my focus has shifted to prioritizing bulk-step prompts that I’ve tested in other AI services, in an attempt to maximize individual prompt payoff and reduce repeat debugging.

TL;DR: In essence, my experience with the v0 AI Agent has meant increased cost, increased unpredictability in both prompt cost and output, loss of control, and decreased collaborative intent. All of which has led to increased frustration with my cost-to-output experience.

So far I’ve rattled on about my experience, my own subjective observations, so let’s take a look from another perspective. Here are some numbers; this was alarming to realize. I hope that something will change.

4 Likes

This this this this this :loudspeaker::loudspeaker::loudspeaker::loudspeaker:

The performance of Agent mode is interesting, but there’s a problem. Personally, I used SM to build the base, which saved quite a lot, and MD for design and corrections — now there’s no way to choose between one or the other.

The agent makes fewer mistakes, but the quality has dropped — both in design quality and in resolutions. When it comes to tasks involving mobile design corrections, it can’t manage to keep the same template as before. Although, I do admit that the number of errors has gone down.

The cost remains similar to MD, but for some file types where I compared an earlier edit with the same edit run again, it actually turned out higher: from $0.08 to $0.14.

I understand that this Agent feature is a test. But I want you to understand that this is harming many projects. I can’t manage to create even a simple landing page with agent mode. I’m having to change the code manually, and that doesn’t make sense. I’d at least like an answer from you on whether agent mode will continue forever or whether you will revert the situation. I need to act quickly because I have projects with tight deadlines. @jacobparis

2 Likes

There HAS to be an ability to revert back to the old v0.dev without agent mode - it’s significantly worse now: I can’t choose a model, the agent keeps flip-flopping, overthinking, and making too many small changes, and performance in the UI has gone really sluggish too!

PLEASE let us revert back to how it was - this agent sucks deeply.

OH - and there are </merged_code> issues: it keeps appending the tag at the bottom of changed files and doesn’t remove it, causing the project to break.
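For anyone hit by this while waiting on a fix, a stray trailing tag like that is easy to strip mechanically. This is a hypothetical cleanup sketch (the tag name comes from the report above; the file extensions are assumptions about a typical v0 Next.js project):

```python
# Hypothetical workaround: strip a stray trailing "</merged_code>" tag
# that the agent sometimes leaves at the bottom of edited files.
from pathlib import Path

STRAY_TAG = "</merged_code>"

def strip_stray_tag(text: str) -> str:
    """Remove the stray tag from the end of a file's text, if present."""
    stripped = text.rstrip()
    if stripped.endswith(STRAY_TAG):
        # Drop the tag plus any whitespace left before it; keep a final newline.
        return stripped[: -len(STRAY_TAG)].rstrip() + "\n"
    return text

def clean_tree(root: str, suffixes=(".ts", ".tsx", ".js", ".jsx")) -> int:
    """Clean all matching source files under root; return how many were fixed."""
    fixed = 0
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in suffixes:
            original = path.read_text(encoding="utf-8")
            cleaned = strip_stray_tag(original)
            if cleaned != original:
                path.write_text(cleaned, encoding="utf-8")
                fixed += 1
    return fixed
```

Running `clean_tree(".")` from the project root before a build is a stopgap only; the real fix has to come from the agent not emitting the tag in the first place.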

The potential here is promising, but in its current form it’s slowing me down.

Main Issues

  • Very simple messages are costing $2–$3.35, much higher than what I’m used to.

  • The agent pauses frequently without explanation.

  • It says it’s resolved issues when nothing has changed; console errors keep repeating.

  • I’m spending credits on repeated fix attempts that don’t actually fix the problem.

  • It doesn’t always follow explicit directions.

Workflow Limitation

I want to be able to edit design and copy directly — without generating a message — but that doesn’t work consistently across the app. Having to run everything through the agent slows iteration and increases costs.

Suggestions

  • Lower credit burn for low-complexity actions.

  • Improve reliability and validation of “fix” messages before marking tasks as complete.

  • Make direct editing of design and copy possible in all parts of the app.

  • Add better error handling so the same bug isn’t “fixed” multiple times without actual change.

Thank you for the $80 in credits over the past few days. I had stopped prototyping the way I used to after the May/June pricing change, and the extra credits have been a huge boost that brought back my energy to build. That said, I’m worried about how quickly credit will burn if agent mode is the only option for these tasks.

Would love to hear back from the team on these issues.

Hahaha, well, it repeats the process 5 or even 8 times for me… what a horrible update this agent is!

We need other models. PLEASE CAN WE HAVE A WAY TO SWITCH MODELS, THIS AGENT MODE IS NOT READY AT ALL!!

In my personal experience, I haven’t been able to solve even one problem over the last few days, and right now I’m just hoping more updates come to fix whatever the weakness in the Agentic AI is.

The devs can revert back to the old ways, but I think it simply needs to be better at solving our problems, and if it does that consistently, nobody will care if it chews through a couple more tokens here and there.

If I keep sending prompts to try to solve one problem and the Agentic AI can’t solve it, I end up demotivated, and that’s where I’m at right now.