I am finding that v0 still needs to redo the same item over and over again, then wants us to pay credits to cover the cost of its errors.
That's not fair. Why are we having to pay for v0's lack of performance?
Please can this be reviewed? I had to buy extra credits after the v0 AI made endless errors. Please can I have a credit?
Yes, this is very frustrating. I wasted a lot of credits on simple errors that v0 made, like not closing brackets and then not fixing them. I think a credit is due.
I guess everyone here is experiencing this same issue… but I think that, until they release one or two more LLMs, really trained and better suited to what v0 does, we will have this kind of problem…
I already spent like $7 or more just clicking “fix with v0” or something
This has been happening to me to the point where I'm having to look elsewhere, which sucks because I love this platform! I've tried contacting support many times and they never get back. So sad.
I'm in the same boat. Look at all the issues v0.dev and v0.app caused. You are correct to say this is a ridiculous payment scheme. Am I to blame, or do I have to pay for all this coding gone wrong?
I understand the issues you are facing, but no product on the market provides the same level of UI as v0 (or as it used to before Agent). Kilo Code is close with Sonnet's API, but it'll still have bugs. Right now, every platform is only suitable for programmers looking to finish their work quickly, not for complete beginners trying to do everything from scratch. If you have a reference website, give it to the platform and it'll make something out of it.
It’s constantly creating bugs and breaking layout and functionality that was perfectly working before. I am spending 80% of my time and credits on fixing bugs. I want my money back. Let your AI spot frustrated users and give them some free credits!
But basically, you guys can have a look at all my chats from the past few days; then you'll see how much time and energy it costs to have the AI fix things that it broke.
I totally understand this technology is still new and improving. But it would really save a lot of frustration if your engineers could build in behaviour that keeps working code intact somehow. Perhaps automatically generate automated tests that do regression testing and bug fixing after every prompt. I wouldn't mind waiting 10 minutes if the AI finds and fixes its own bugs.
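To illustrate the kind of post-prompt check I mean, here is a minimal sketch (my own illustration, not anything v0 actually runs): a naive bracket-balance check that could flag broken output, like the unclosed brackets mentioned above, before any credits are spent on "fix with v0". It ignores brackets inside strings and comments, so it's only a toy example.

```javascript
// Naive regression check: does the generated source have balanced
// (), [], {} brackets? A platform could run checks like this after
// every prompt and retry automatically instead of charging the user.
function bracketsBalanced(source) {
  const pairs = { ')': '(', ']': '[', '}': '{' };
  const stack = [];
  for (const ch of source) {
    if (ch === '(' || ch === '[' || ch === '{') {
      stack.push(ch);
    } else if (ch in pairs) {
      // Closing bracket must match the most recent opener.
      if (stack.pop() !== pairs[ch]) return false;
    }
  }
  // Every opener must have been closed.
  return stack.length === 0;
}

console.log(bracketsBalanced('function f() { return [1, 2]; }')); // true
console.log(bracketsBalanced('function f() { return [1, 2; }'));  // false
```

A real version would obviously run the project's actual test suite, but even a cheap syntactic gate like this would catch the bracket errors people are describing in this thread.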
I’m not sure of your workflow, but this might be helpful:
We recently launched an integration with CodeRabbit, and have an upcoming event with them soon which may be interesting. Anyway, thanks again for the feedback, and we appreciate you reporting these issues to us.