AI SDK v5 Streaming Works Server-Side But Not in UI - OpenRouter

Hi Team

Problem Summary

I have a Next.js 15 app using AI SDK v5 with OpenRouter that shows streaming responses working perfectly on the server side (console shows all text-delta parts), but the UI displays the complete response all at once instead of streaming word-by-word.

Tech Stack

  • AI SDK: v5 (latest)
  • Next.js: 15
  • Provider: OpenRouter (Claude Sonnet 4, GPT-4o, etc.)
  • Streaming: Server-Sent Events with resumable streams
  • UI: React with useChat hook

Server-Side Evidence (Working)

🔥 STREAM PART: {"type":"text-delta","id":"gen-xxx","delta":"Hi "}
🔥 STREAM PART: {"type":"text-delta","id":"gen-xxx","delta":"there, "}
🔥 STREAM PART: {"type":"text-delta","id":"gen-xxx","delta":"Driss! "}
// ... continues streaming properly

Client-Side Issue

  • UI shows complete response instantly instead of streaming
  • useChat with experimental_throttle: 100
  • Using JsonToSseTransformStream() in response
  • Resumable stream context with Redis

Key Code Snippets

API Route:

const stream = createUIMessageStream({
  execute: ({ writer: dataStream }) => {
    const result = streamText({
      model: userProvider.languageModel(selectedChatModel),
      experimental_transform: smoothStream({ chunking: 'word' }),
      // ... other config
    });
    
    result.consumeStream();
    dataStream.merge(result.toUIMessageStream({ sendReasoning: true }));
  }
});

return new Response(
  streamContext 
    ? await streamContext.resumableStream(streamId, () => 
        stream.pipeThrough(new JsonToSseTransformStream())
      )
    : stream.pipeThrough(new JsonToSseTransformStream())
);
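
For comparison, a stripped-down route that bypasses the resumable-stream/Redis wrapper entirely would look roughly like this (a sketch, reusing userProvider and selectedChatModel from the setup above; toUIMessageStreamResponse handles the SSE conversion and headers itself):

```
// app/api/chat-debug/route.ts (hypothetical isolation route)
import { streamText, smoothStream, convertToModelMessages, type UIMessage } from 'ai';

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: userProvider.languageModel(selectedChatModel), // same provider setup as above
    messages: convertToModelMessages(messages),
    experimental_transform: smoothStream({ chunking: 'word' }),
  });

  // No resumable stream, no manual JsonToSseTransformStream pipe
  return result.toUIMessageStreamResponse({ sendReasoning: true });
}
```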

Client:

const { messages, status } = useChat<UIMessage>({
  experimental_throttle: 100,
  transport: new DefaultChatTransport({
    api: '/api/chat',
    fetch: fetchWithErrorHandlers,
  }),
  onData: (dataPart) => setDataStream(ds => [...ds, dataPart])
});

What I’ve Tried

  • ✅ Verified JsonToSseTransformStream() is used
  • ✅ Server logs show proper streaming parts
  • ✅ smoothStream({ chunking: 'word' }) configured
  • ✅ experimental_throttle: 100 set
  • ❌ UI still shows complete response at once
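
A quick way to tell whether the bytes actually leave the server incrementally is to read the raw response outside useChat and log when chunks arrive; a rough sketch, with a simplified request body (adjust the URL and body to whatever your route expects):

```
const res = await fetch('http://localhost:3000/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    messages: [{ id: '1', role: 'user', parts: [{ type: 'text', text: 'Hi' }] }],
  }),
});

const reader = res.body!.getReader();
const decoder = new TextDecoder();
for (;;) {
  const { done, value } = await reader.read();
  if (done) break;
  // If these timestamps are spread out, the server is streaming and the problem
  // is on the client; if everything lands in one burst, something in between is buffering.
  console.log(Date.now(), decoder.decode(value, { stream: true }));
}
```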

Has anyone encountered similar issues where server-side streaming works but client-side doesn’t update incrementally?

Thanks


I'm also experiencing this exact issue. @driss-6946, were you able to find a solution to this?

I've also just upgraded to v5 and am experiencing this issue. The Next.js client uses useChat to send messages to a Next.js API route, where createUIMessageStream is set up with streamText to proxy to a backend that exposes a chat completions endpoint.

Logging chunks in streamText's onChunk shows that chunks are streaming, but the UI seems to receive them in large groups, as though something is buffering them, instead of rendering each chunk the moment streamText receives it. This is particularly noticeable when streaming reasoning tokens.
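
For reference, the server-side logging that shows chunks arriving one by one looks roughly like this (a sketch; the rest of the streamText config, and where model and messages come from, is omitted):

```
const result = streamText({
  model,
  messages: convertToModelMessages(messages),
  onChunk({ chunk }) {
    // Fires as soon as streamText receives each chunk from the backend
    console.log(Date.now(), chunk.type, JSON.stringify(chunk));
  },
});
```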

I have experimented with the same flags as the OP, and have also set the following headers on createUIMessageStreamResponse to try to force streaming without buffering, to no avail:

```
"X-Accel-Buffering": "no",
"Content-Type": "text/event-stream"
```
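
Put together, the response is created roughly like this (a sketch; `stream` is the UI message stream built with createUIMessageStream, as in the original post):

```
return createUIMessageStreamResponse({
  stream,
  headers: {
    'X-Accel-Buffering': 'no',
    'Content-Type': 'text/event-stream',
  },
});
```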

While the v5 package is great, the stuttering of the stream makes our app feel like it has regressed at times rather than progressed, so it would be great to ensure chunks always stream.


Hey there! I had a similar issue before. The solution was to update the useMemo dependencies inside the messages component. It turned out that the update wasn't triggering because of changes in the props. I'm not sure about your case; maybe share your frontend code.
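
In case it helps, here's a hypothetical example of the kind of thing that was wrong in my case (component and prop names are made up): the memo only depended on the message id, so it never recomputed while new text-delta parts were merged into message.parts during streaming:

```
import { useMemo } from 'react';
import type { UIMessage } from 'ai';

function MessageText({ message }: { message: UIMessage }) {
  const text = useMemo(
    () => message.parts.map((part) => (part.type === 'text' ? part.text : '')).join(''),
    // Before: [message.id] — never recomputed while parts streamed in.
    // After: depend on the parts array so each delta triggers a re-render.
    [message.parts],
  );

  return <p>{text}</p>;
}
```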


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.