Is it just me, or has the UI quality in v0 gotten worse?

I’ve been using v0 for a while now. I originally picked it over other tools because its UI designs were clean, premium, and far better than anything other AI builders could produce.

But now, after the change from the old monthly subscription to the new usage-based token model, I feel like the design quality has dropped hard. No matter how I write my prompts, the results are generic, low quality, and honestly unusable for serious projects. It feels like the AI just throws random blocks on the page without any taste or structure.

I’m saying this with full respect, because I really liked v0 and I still want to use it, but something’s definitely off lately. It’s not just a small drop; it’s a massive downgrade in what used to be the strongest feature of the platform.

If anyone else feels the same, reply or react to this message so maybe the team can hear us and bring back what made v0 special in the first place.

This is honest feedback from someone who believed in this tool and still wants it to succeed.


Appreciate you reaching out to share your experience. Do you have any specific examples I can share with the team?

There’s not much detail to go on here. It would be really helpful to see some examples of your prompts and the output you didn’t like. Otherwise, we can’t be sure what should be changed :slightly_smiling_face:


Thanks for the response. I understand the need for more context, so here’s a bit more detail.

I’ve built two custom GPTs specifically to work with v0. One is focused on UI design, and the other on development structure. I’ve studied how v0 works and crafted my prompts based on that. These GPTs generate highly tailored instructions meant to get the best out of the system.

Even with all of that, the output has become consistently generic. The sections feel disconnected, the components look like basic UI kits, and there’s barely any animation or structure. It’s just stacked blocks with placeholder text and colored backgrounds. Even when I reference modern design, strong brands, and ask for high-level creative layout, the results still fall flat.

When I first started using v0, the designs had more depth. There were better font choices, spacing, animations, and a clear sense of structure. Now that feeling is gone. I noticed the shift around the same time the pricing moved from a monthly subscription to the new usage-based token model. I’m not saying it’s directly related, but that’s when the drop became noticeable.

That said, I don’t want to just blame the tool. Maybe there’s something I’m missing. Maybe there’s a new way of structuring prompts or designing workflows that your team knows and we don’t. If you’re part of the team or close to it, I’d really appreciate it if you could share how we can structure the best possible UI outputs using v0 right now. If there’s a better method than using custom GPTs, I’m open to learning it.

At the end of the day, we just want to create great results. So if you have insight, we’re listening.

Hi Amy,

Rather than relying on examples from users, why not run some prompts yourself and see what the GPT delivers? I’ve had so many experiences of the GPT failing to follow instructions; it even told me it did so because it was being lazy, yet it still charged my tokens.

I’m sure, like my fellow users, I don’t really have the time to supply Vercel with specific examples, but I think you can surmise from the numerous forum posts on this that the problem exists and is real. I’ve taken a month-long break from v0 and it still has not been resolved.

Kind regards,

Brian

Useful feedback @animerezz-gmailcom! Thank you for sharing that. I’ll hand it over to the team to help guide future iterations.

There are some great tips in the community handbook if you want to learn more about prompting strategies. I think the topics on theming, design mode, and strategic forking would be useful for you.

We also have an upcoming live session with Claire Vo where you can ask questions and learn more.

@brianmrobinson50-733 We use v0 every day at Vercel. Dogfooding is very much part of the culture here.

When we ask for examples, it’s because “it’s not good” doesn’t give us enough detail to know what should be changed. Imagine going to a doctor, telling them “I feel unwell”, and expecting them to be able to fix you with no other details. In the absence of actual telepathy, of course, we need to ask for more information. :smile:

Remember that every message you send in a chat teaches the AI what kind of response it thinks you want. If you call it stupid and tell it that it’s lazy, then the responses you get will start to match that expectation. The community handbook topics about v0 linked above can help you learn how to get LLMs to give you the output you want.

Hope that helps!

Hi Amy,

Thanks for the reply; it is much appreciated. I take your point, but if I were a doctor with 100 unconscious patients, I would run all sorts of tests to identify and triage the root cause of the problem rather than relying on accompanying evidence. There is obviously an issue, because I have 100 unconscious patients! In the same way, there are numerous complaints on these forums that payment is taken despite the GPT failing to actually carry out the prompt.

I always follow the advice in the community handbook and work with LLMs at my place of work. I run my prompts through ChatGPT before inputting them into v0, yet it often takes a payment without applying any meaningful changes to the code.

Kind regards,

Brian