V0 Platform API - Webhooks bug report

It looks like I've found a bug in the webhook endpoint of the v0 API, so I'm submitting it here for the v0 team to check and let us know.

Bug report below (AI-generated, but I verified every detail before submitting):

v0 Webhook Intermittent Event Delivery Report

Summary

During a multi-step pipeline that sends prompts to the same chat in sequence, one per webhook notification, the v0 Platform abruptly stopped delivering webhook notifications partway through the job. Our system relies on those events, particularly message.finished, to know when a generated page is done so we can dispatch the next one. After the first few pages were processed without issue, no further events arrived for roughly twelve minutes. Deliveries resumed, and the job completed, only after we manually nudged the conversation through the v0 UI. Throughout the pause our infrastructure remained healthy: every request we receive is persisted immediately, all responses were HTTP 200, and no errors were triggered.

Environment

  • v0 Platform API – production endpoints (https://api.v0.dev/v1/...).

  • Webhook listener – HTTPS serverless function that logs every payload to persistent storage before doing any other work.

  • Event subscription – for this chat we subscribed to message.finished and message.created, the two events that should indicate an assistant message has arrived.

  • Persistence layer – Supabase/PostgreSQL powers the idempotency and dispatch guards. Every successfully processed assistant message inserts a guard row keyed by the message ID and the page URL.
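To make the guard mechanism concrete, here is a simplified sketch of how the dispatch guard behaves. The function name `claimGuard` and the in-memory `Set` are my own illustrations; the real implementation is a Supabase/PostgreSQL insert against a table with a unique key, but the semantics are the same:

```typescript
// Stand-in for the PostgreSQL guard table (real version: a table with a
// UNIQUE constraint on the key column).
const guardRows = new Set<string>();

// Attempts to claim a guard key. Returns true if this is the first time the
// key has been seen, false if a row already exists (i.e. a duplicate
// webhook delivery). In Supabase this maps to
// INSERT ... ON CONFLICT DO NOTHING and checking the affected row count.
function claimGuard(key: string): boolean {
  if (guardRows.has(key)) return false; // duplicate delivery, skip side effects
  guardRows.add(key);
  return true;
}
```

Keys follow the prefixes mentioned below (e.g. `v0_msg_processed:` plus the message ID), so redelivered events are naturally no-ops.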

What Happened (UTC)

  • 03:56 – We create the chat, subscribe the webhook, and begin sending prompts to v0 one at a time.

  • 03:57 – The first message.created arrives. We confirm the assistant has finished (“finishReason: stop”) and immediately dispatch the next prompt. This repeats smoothly for several prompts.

  • 04:02 – Five prompts remain in the queue. We log the last automatic dispatch. Then everything goes quiet.

  • 04:02 – 04:12 – No webhook calls arrive. Our persistence tables remain unchanged and no errors are logged. The pipeline is idle, waiting for v0 to send the next assistant completion.

  • 04:12 – After we manually send v0 “continue” inside the v0 chat UI, another message.finished arrives and our pipeline instantly pushes the next prompt.

  • 04:13 – 04:20 – Each subsequent manual nudge triggers a fresh message.finished, allowing us to drain the remaining five prompts. The final nudge produces the last assistant response, completing this stage of the pipeline; downstream steps are unrelated.

Once v0 resumed delivery, everything on our side behaved as designed: every completion wrote its guard row, the next page was dispatched exactly once, and the run ended with unprocessed_pages empty.
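For clarity, the handler's control flow is roughly the following (a simplified sketch; the event shape, `dispatched` array, and queue contents are placeholders I made up, and the real function persists every payload to storage before doing anything else):

```typescript
// Assumed minimal shape of the events we subscribe to.
type WebhookEvent = {
  type: string;           // e.g. "message.finished" or "message.created"
  messageId: string;
  finishReason?: string;  // "stop" when the assistant completed normally
};

const processed = new Set<string>();     // idempotency guard (see above)
const dispatched: string[] = [];         // pages we have sent to v0
const queue = ["page-2", "page-3"];      // hypothetical remaining prompts

// Always returns 200 so the sender never retries due to our response.
function handleEvent(ev: WebhookEvent): number {
  // 1. (Omitted) persist the raw payload to storage first.
  // 2. Only a completed assistant message advances the pipeline.
  if (ev.type !== "message.finished" || ev.finishReason !== "stop") return 200;
  // 3. Idempotency guard: each message ID is handled exactly once.
  if (processed.has(ev.messageId)) return 200;
  processed.add(ev.messageId);
  // 4. Dispatch the next queued prompt exactly once.
  const next = queue.shift();
  if (next) dispatched.push(next);
  return 200;
}
```

The point of this structure is that a duplicate or out-of-order delivery can never double-dispatch a page, which is why the guard tables stayed clean during the incident.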

Observed Behavior

  1. The first portion of the run is normal: v0 sends message.finished, our webhook consumes it, and the next page is sent.

  2. Then deliveries simply stop. There is no error, no retry, and no alternate event type during the quiet window.

  3. The webhook endpoint sees no traffic at all; every serverless invocation is accounted for.

  4. When we type “continue” in the v0 UI, a new message.finished appears and our automation immediately resumes.

Expected Behavior

Whenever the assistant finishes a message, v0 should send us one of the subscribed completion events without manual intervention. That allows the automation to keep moving through the queue. Once v0 resumes, the remaining pages finish without any human touch.

Data Collected

  • The webhook-event log stops at 04:02 and resumes at 04:12 with no entries in between. Every resumed delivery corresponds to one manual prompt.

  • For each assistant message we successfully processed, a matching guard row exists in the database (e.g., a page_dispatched: row keyed by the page URL and a v0_msg_processed: row keyed by the message ID), proving we handled each event only once.

  • The serverless function returned 200 OK on every call, and there are zero errors or timeouts in the hosting logs.

Impact

  • The automation sat idle for roughly twelve minutes and required someone to babysit the chat.

  • Without intervention, the remaining pages would never have been generated.

  • Runs that depend on unattended processing are effectively blocked when this happens.

Request to the v0 Team

  1. Please investigate why webhook delivery paused despite no visible errors on either side.

  2. Let us know if there are circumstances (long-running chats, multiple messages queued, etc.) where the platform intentionally withholds events or expects polling.

  3. If there are additional event types we should listen for—or recommended fallbacks when the platform appears idle—please advise so we can stay aligned with best practices.

END OF AI-GENERATED REPORT - (I re-read it multiple times; everything checks out as described above.)

I'll be happy to provide anonymized logs or run additional diagnostics if that helps you with debugging.

TL;DR

Halfway through a long chat run, v0 abruptly stopped sending webhook events for about twelve minutes. Our endpoint was up, healthy, and logged nothing during the gap. Manually nudging the chat in the v0 UI caused events to resume and the job to finish. I'm asking v0 engineering to investigate why delivery stopped and whether there are known conditions where this can occur.

Hello? Issue still persists. Can someone from the staff please check?

Good lord please let someone at least acknowledge this post.

Hey, @k53n0!

Thanks for your patience :folded_hands:

Every message in the community gets surfaced to the appropriate product team, so I can assure you that this has been seen by staff.

I’ll +1 this again internally. Appreciate the time you’ve taken to write this up :slight_smile: