I’m using the Vercel AI SDK (ai package) to stream responses from a model, and I want to include extra provider metadata — specifically the cost — in the final finish metadata sent to the client.
Here’s the simplified version of my code:
```ts
const result = streamText({
  prompt,
  model: gateway(ai_model),
});

const providerMetadataPromise = result.providerMetadata;

return result.toUIMessageStreamResponse({
  originalMessages: messages,
  sendReasoning: true,
  messageMetadata: ({ part }): Record<string, string> | undefined => {
    if (part.type === 'start') {
      return { model: ai_model };
    }
    if (part.type === 'finish') {
      let answerCost: string;
      providerMetadataPromise.then((data) => {
        answerCost = (data!.gateway as any).cost;
      });
      // answerCost has not been assigned yet when this returns
      return { model: ai_model, cost: answerCost! };
    }
  },
});
```
- If I use `await result.providerMetadata` before streaming starts, I get the cost correctly, but it blocks the start of the stream.
- If I remove the `await` and use `.then()`, streaming starts instantly, but the cost in the `finish` metadata is always `undefined` on the client.
- I also tried making `messageMetadata` async, but it seems the SDK doesn't await it, so the cost never arrives.
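To make the race concrete, here is a minimal standalone sketch with no SDK involved (hypothetical names): the callback passed to `.then()` runs on a later microtask, so a synchronous `return` always reads the variable before it is assigned.

```typescript
// Sketch of the race: the synchronous return reads `cost`
// before the .then() callback has had a chance to run.
const metadataPromise: Promise<{ cost: string }> =
  Promise.resolve({ cost: '0.0042' });

function buildFinishMetadata(): { cost?: string } {
  let cost: string | undefined;
  metadataPromise.then((data) => {
    cost = data.cost; // runs later, after buildFinishMetadata has returned
  });
  return { cost }; // cost is still undefined here
}

console.log(buildFinishMetadata()); // { cost: undefined }
```

This is exactly what happens inside my `finish` branch, which is why the client never sees the cost.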
I want:

- The streaming to start immediately (no delay).
- The final `finish` metadata to contain both `{ model, cost }` values.
- The client to actually receive the cost along with the last message.
Is there a correct, non-blocking way to include async provider metadata (like cost) in the finish metadata for a streaming response in the Vercel AI SDK?