v0.dev seems unbearable and impractical for any sort of half-decent no-code project. Is it possible to add some long-term memory? It's awfully painful to have something built and working, and then a simple prompt for something sometimes unrelated breaks or regenerates your whole code because you didn't name every step of your project, and everything you've added, in great detail in the prompt. Even when you do include it all, there's no guarantee it won't still break things. It feels like a roll of the dice each time you prompt. Once the project reaches a certain size it becomes unmanageable to keep things working, but I feel like having some memory would surely fix this?
Am I doing something wrong? Any help or feedback would be appreciated.
Hi @brandynemail-gmailco, I'm sorry to hear about your experience with v0. I think the best way to work around this issue is to be as descriptive and precise in your prompts as you can. Like any other LLM product, v0's output depends heavily on the quality of the input.
Now, I understand that the results may sometimes be subpar even with good prompts. In such cases, I'd recommend restoring to the previous working version immediately and trying again with a tweaked prompt.
The v0 team is constantly working to improve the user experience here, and I'm sure these issues will get better with time.
So, to clarify: long-term memory isn't an option, and I can't build instructions into my prompts that it will always remember? I basically need to copy and paste my previous prompt and add onto it with each ongoing prompt to continue to scale?
Hi @brandynemail-gmailco, v0 does remember the context of the whole chat, if that's what you mean by "long-term memory." What I meant previously is that as the context gets larger and larger, the quality of the output might in some cases not be as good as we'd wish if the prompt/message isn't precise and descriptive enough.
I’d also like to point out the Knowledge feature, where you can add some common context about your project to help v0.