Streaming text tool output

When using useChat and streaming a tool's response, we find that the incoming text deltas are not marked as output from a given tool; the only signal we get is that the tool's output is 'available'.

Is there a way to flag incoming text deltas as being output from a given tool?

Approximate implementation:

// backend (configs omitted for brevity)
const stream = streamText({
  /* model, messages, ... */
  tools: {
    textGeneration: tool({
      execute: ({ prompt }) => {
        /* ... */
        // returns the (unawaited) result of a nested streamText call
        return streamText({
          prompt,
        });
      },
    }),
  },
});

return stream.toUIMessageStreamResponse();
// front end
const useChatRefs = useChat({
  onToolCall: ({ toolCall }) => {
    // this does fire once, when the tool is called and before the response
    // starts streaming in
    console.debug({ toolCall });
  },
  transport: new DefaultChatTransport({ api: `${OUR_API_URL}/chat` }),
  onError: handleError,
});

We get messages with a part of type tool-textGeneration; however, even when the state is output-available, the output property is an object rather than the built-up text string (as in the AI SDK UI: Chatbot Tool Usage example):

{
    "type": "tool-textGeneration",
    "toolCallId": "tooluse_nI1eu2t1R3-JaKZY3FfcmA",
    "state": "output-available",
    "input": {
        "prompt": "Write a two paragraph story about an ant"
    },
    "output": {
        "_totalUsage": {"status": {"type": "pending"}},
        "_finishReason": {"status": {"type": "pending"}},
        "_steps": {"status": {"type": "pending"}},
        "includeRawChunks": false,
        "baseStream": {}
    }
}
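For reference, the pending promises in that output object suggest the tool is returning the unawaited StreamTextResult itself. If per-tool streaming weren't required, awaiting the inner result's text would at least put the final string into the output-available payload. A sketch only, assuming AI SDK v5's streamText/tool APIs and zod (model config omitted):

```typescript
// Sketch: assumes AI SDK v5 `streamText`/`tool` and zod; model omitted.
import { streamText, tool } from "ai";
import { z } from "zod";

const textGeneration = tool({
  description: "Generate text for a prompt",
  inputSchema: z.object({ prompt: z.string() }),
  execute: async ({ prompt }) => {
    const inner = streamText({ /* model, */ prompt });
    // `inner.text` is a promise for the full generated string; awaiting it
    // means output-available carries text instead of stream internals
    return await inner.text;
  },
});
```

This trades streaming for a clean final value, so it doesn't solve the attribution-while-streaming problem below.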

Instead, a subsequent part (type: text) contains the building-up string for that tool's streamed output.

See data chunks:

// notably, the tool _input_ deltas are sent to the front end...
data: {"type":"tool-input-delta","toolCallId":"tooluse_nI1eu2t1R3-JaKZY3FfcmA","inputTextDelta":"an ant\"}"}
data: {"type":"tool-input-available","toolCallId":"tooluse_nI1eu2t1R3-JaKZY3FfcmA","toolName":"textGeneration","input":{"prompt":"Write a two paragraph story about an ant"}}
data: {"type":"tool-output-available","toolCallId":"tooluse_nI1eu2t1R3-JaKZY3FfcmA","output":{"_totalUsage":{"status":{"type":"pending"}},"_finishReason":{"status":{"type":"pending"}},"_steps":{"status":{"type":"pending"}},"includeRawChunks":false,"baseStream":{}}}
data: {"type":"finish-step"}
data: {"type":"start-step"}
data: {"type":"text-start","id":"0"}
// our generated story, as unmarked text-deltas
data: {"type":"text-delta","id":"0","delta":"The smallest"}
data: {"type":"text-delta","id":"0","delta":" member of the colony,"}
data: {"type":"text-delta","id":"0","delta":" Pip the ant, ha"}

TL;DR:

How do we know, on the front end, which tool a piece of streamed tool output came from?

This happens because returning a streamText() result from a tool’s execute function doesn’t automatically stream with tool identification; it just shows up as normal assistant output.

The easiest fix is to enable toolCallStreaming in your streamText config so the tool’s output is properly tagged and streamed. For most cases, that’s all you need. But you can also manually collect stream chunks or handle tool output tracking on the frontend if you want more control.

Hi Pauline, thanks for the response.

The documentation says that tool streaming should be on by default in v5, but that’s not what we observe.

For now we’ve switched to using createUIMessageStream, passing the writer into the tool and manually sending the tool chunks out ourselves, but this feels like a workaround rather than intended usage, and it adds complexity.
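The forwarding piece of that workaround can be sketched as a small SDK-independent helper (the name `forwardToolText` and the `data-toolText` part type are our own, not SDK APIs). Inside createUIMessageStream's execute({ writer }), the tool's execute function would call this with its toolCallId and the inner streamText result's textStream:

```typescript
// Hypothetical custom data part we emit per delta; reusing the same `id`
// lets the client treat successive writes as updates to one part.
type ToolTextPart = {
  type: "data-toolText";
  id: string;
  data: { toolCallId: string; text: string };
};

async function forwardToolText(
  toolCallId: string,
  deltas: AsyncIterable<string>, // e.g. streamText(...).textStream
  write: (part: ToolTextPart) => void, // e.g. the UI message stream writer
): Promise<string> {
  let text = "";
  for await (const delta of deltas) {
    text += delta; // build up the full string as deltas arrive
    write({ type: "data-toolText", id: toolCallId, data: { toolCallId, text } });
  }
  return text; // returned as the tool's final output
}
```

The key point is that every emitted chunk carries the toolCallId, which is exactly the attribution missing from the plain text-delta chunks.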


I think what they mean by "tool streaming" is that input streaming is currently implemented, i.e. the tool's input can be streamed in. There's no equivalent for streaming the tool's output, which I'd also need.

Here’s a related discussion:
https://community.vercel.com/t/can-i-stream-structured-tools-ouput/12124

<Message key={id} role={role}>
  {parts.map((part, index) => {
    switch (part.type) {
      case "text":
        return <span key={index}>{part.text}</span>;
      case "tool-queryTool": {
        switch (part.state) {
          case "input-streaming":
            // This part is streamed
            return <pre key={index}>{JSON.stringify(part.input, null, 2)}</pre>;
          case "input-available":
            return <pre key={index}>{JSON.stringify(part.input, null, 2)}</pre>;
          // This part is not streamed
          case "output-available":
            return <pre key={index}>{JSON.stringify(part.output, null, 2)}</pre>;
          case "output-error":
            return <div key={index}>Error: {part.errorText}</div>;
        }
      }
      default:
        return null;
    }
  })}
</Message>
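If one falls back to the custom data-part workaround mentioned earlier in the thread (emitting a part per delta, keyed by toolCallId), the client can attribute streamed text to a specific tool call with a small lookup. A sketch; the `data-toolText` part type and `toolTextFor` helper are our own naming, not SDK APIs:

```typescript
// Hypothetical custom data part emitted from the backend per delta; the AI SDK
// passes `data-*` parts through to message.parts on the client.
type ToolTextPart = {
  type: "data-toolText";
  data: { toolCallId: string; text: string };
};
type MessagePart = ToolTextPart | { type: string };

function toolTextFor(parts: MessagePart[], toolCallId: string): string | undefined {
  // the last matching part wins: each write carries the full built-up string
  let text: string | undefined;
  for (const part of parts) {
    if (part.type === "data-toolText" && (part as ToolTextPart).data.toolCallId === toolCallId) {
      text = (part as ToolTextPart).data.text;
    }
  }
  return text;
}
```

In the render above this would add one more case, e.g. `case "data-toolText":`, with the text already tied to its tool call.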

@Pauline Is support for output streaming on the roadmap?

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.