So something has been seriously wrong with v0 since last week. Up until Wednesday everything worked like a charm, then something happened with the model. It seems to randomly switch to GPT-4o or Claude 3.5, which completely messes up my Next.js 15 project. Even when selecting the new “large” model and paying a premium for tokens, it still randomly reverts back to Claude 3.5. The result is that we haven’t been able to build and ship anything in a week, because we constantly need to restore older versions. Vercel is scamming their customers now: we buy a lot of credits but spend them on issues created by the “lesser” models and on constant restores. They claim their “large” model is great, but it still at times runs on Claude 3.5, which has no knowledge of Next.js 15. This is crap, you need to be more transparent. Fix this, Vercel!!!
Something is certainly not right for me today either. It's slower and less responsive, it doesn’t seem to process my input before it starts editing files, and the preview is very slow to load. Let’s hope this is sorted soon. I had been very happy until this week.
Hello!
I have been shortchanged by the AI many times (unintentionally), maybe due to context limitations or something similar. The agent creates something, then stops, and when I ask it to continue it starts all over again and removes what it created. When I try to download, I get a page with error 38.
I have repeatedly ended up with wasted tokens and no actual outcome.
I have experienced this really many times.
I personally see this as neither my responsibility nor my mistake.
I can’t think of any way I might have caused this.
What’s your current experience? I’m finding credit usage way up & more bugs.
Same exact experience here, quality is horrible compared to the previous version and you pay way more for each prompt.
Same. I am trying to download a logo, but it keeps giving me stock photos instead
I’ve been noticing similar issues lately. The random switching between models is really frustrating, especially when working on framework-specific projects like Next.js 15. It would definitely help if Vercel gave more transparency on which model is being used in real time, and why. Consistency is key when teams are paying for credits and relying on stable output. I also came across another thread where people are facing the same issue: https://community.vercel.com/t/usage-model-change-has-ruined-my-project/12688. This shows it’s not an isolated problem, so hopefully Vercel takes note and addresses it soon.