Frustrated with v0

I’ve been using v0 for over two months now with a premium subscription, but the experience has become increasingly frustrating as my project has grown. While v0 performed reasonably well when the project was small, it now struggles significantly with a medium-scale application. Several issues have persisted:

  • Generations often stop midway, and repeated attempts to retry them don’t help.
  • The generation process is frequently extremely slow, sometimes appearing to be stuck in an infinite loop.
  • Context retention is poor, leading to hallucinations and irrelevant outputs.
  • There’s no support for two-way GitHub integration, which makes it tedious to manually copy and paste changes into my existing project that’s already hosted on GitHub and deployed on Vercel. At the very least, there should be an option to connect a Vercel project directly to v0.

Given these ongoing problems and the lack of essential integrations, I’ve decided to cancel my subscription and explore other alternatives.

2 Likes

Thanks for sharing your experience. The team is working on upgrades and solutions, including two-way sync with GitHub, but I can understand your frustration with it not being ready just yet.

I’ve shared your feedback with the team so we can make v0 even better 🙂

1 Like

When I work in the IDE and push to GitHub, the way I make sure v0 gets the changes is by creating a fork from the “main” branch using the Project Settings option.

1 Like

Have you tried the new git sync feature yet? We just launched it this week. That should solve your problem with GitHub + IDE edits.

1 Like

I would say the major downgrade, or most of the problem, is with Vercel’s new “context” feature.
I get that it might be a useful function to save a lot of compute whenever a user wants to make changes that would modify the app/project globally, and in most cases it kicks in right around v30–v35.

Is Vercel planning to keep that context function? Maybe there could be some kind of option for us to choose what we’d like to use? On the first versions it always does a global read of the project/code, but once context starts appearing, the experience really turns for the worse: most of the time it’s hallucinations, or even styles being changed when not requested, and so on.

Please concentrate on that, because from where I see it, most of the problems with generations come from the “context” approach v0 takes once a project reaches medium scale.

Hey @ikkelucky. It isn’t so much that v0 does a full read of the code for the first versions; it’s more that the first versions are generally smaller and fit entirely within the initial context. An LLM’s capacity to remember things about the code is not unlimited, so what’s in the context window changes over time. It includes not just the code, but also the message history needed to make sense of new prompts.
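As a rough illustration (this isn’t v0’s actual internals, just the general arithmetic of any fixed context window), here’s how code and chat history compete for the same token budget. The ~4 characters per token figure is a common heuristic for English text, and the window size is a made-up example:

```ts
// Illustrative only: why long chats crowd out code context.
const CONTEXT_WINDOW_TOKENS = 128_000; // hypothetical model limit

// Common rough heuristic: ~4 characters per token for English text.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Code files and chat messages draw from the same fixed budget.
function remainingBudget(codeFiles: string[], chatHistory: string[]): number {
  const used =
    codeFiles.reduce((sum, file) => sum + estimateTokens(file), 0) +
    chatHistory.reduce((sum, msg) => sum + estimateTokens(msg), 0);
  return CONTEXT_WINDOW_TOKENS - used;
}
```

Once that remaining budget runs out, older material has to be dropped or summarized, which is when the model starts “forgetting” parts of the project.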

A common strategy for managing this is to fork to a new chat when you reach a stable version of a component or feature. Another option is to break the app into smaller components and work on them individually. Then, you can combine them when each piece is ready to be included.
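For example (purely hypothetical file names, not something v0 prescribes), keeping each feature in its own small component means a single chat only ever needs to hold one piece at a time:

```tsx
// components/OrderSummary.tsx -- small enough to iterate on in its own chat
export function OrderSummary({ total }: { total: number }) {
  return <p>Total: ${total.toFixed(2)}</p>;
}

// app/page.tsx -- once each piece is stable, assemble the pieces here
import { OrderSummary } from "./components/OrderSummary";

export default function Page() {
  return (
    <main>
      <OrderSummary total={42} />
    </main>
  );
}
```

Each component stays small enough to fit comfortably in context, and you can fork a fresh chat per component without dragging the whole project’s history along.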

If you’re curious, you can learn more about context windows here:

Thanks a lot. I read the whole article and learned a bit about best practices for communicating with LLMs.
Now that I understand tokens and how they work, I have a better idea of how my code and prompts can perform better when interacting with any project on v0.

I imagine that if I start using fewer tokens, phrase my prompts as direct commands (instead of “talking”), and summarize requests by writing less, I’ll take more advantage of the context window, put less “stress” on the LLM, and get a better coding experience and better projects?

I hope that reasoning is correct. Either way, I learned a lot and will start planning my projects a bit better. Thanks and have a nice day.

1 Like