[▲ Vercel Community](/)

[AI SDK](/c/ai-sdk/62)

# Text after tool usage

419 views · 1 like · 3 posts


Ziadoonalobaidi (@ziadoonalobaidi) · 2025-06-19


Hello guys,

I am using a tool to generate tasks from a meeting. The tasks should be objects, and the tool generates them well, but

**The problem:** after generating the objects, the Mistral LLM always returns a text part after the tool result.


I tried sending the tool invocation as a call rather than a result, but that gives an error:

```
message: "Unexpected role 'user' after role 'tool'",
```
I saw the getWeather example, which involves calling the model more than once and then using addToolResult on the frontend, but I want everything in a single call, and I want only the objects. I know I can generate objects with streamObject, but I want the LLM to decide whether to use a specific tool or not.

Code:

```
import { streamText, tool } from "ai";
import { z } from "zod";

const result = streamText({
  model: getMistralModel(selectedLlm),
  maxSteps: 5,
  system,
  messages: messages.slice(-6),
  tools: {
    proposeTodoCreation,
  },
});

export const proposeTodoCreation = tool({
  description: "to create tasks or todos or to modify tasks or todos",
  parameters: z.object({
    todos: z
      .array(
        z.object({
          todo: z.string().describe(".."),
          assignee: z.string().describe("..."),
        })
      )
      .describe("The array of todos identified from the context"),
  }),
  execute: async ({ todos }) => todos,
});
```
I appreciate it, thanks! 😄


Amy Egan (@amyegan) · 2025-06-20 · ♥ 1

This is common behavior with LLMs; they often add explanatory text after tool usage.

I asked v0 for ways to avoid that. Here are the options it suggested:

## Option 1: Filter the response on the client side

Since you’re using `streamText`, you can process the final result and extract only the tool results:

```
const result = streamText({
    model: getMistralModel(selectedLlm),
    maxSteps: 5,
    system,
    messages: messages.slice(-6),
    tools: {
        proposeTodoCreation,
    },
});

// Extract only tool results
const toolResults = [];
for await (const part of result.fullStream) {
    if (part.type === 'tool-result') {
        toolResults.push(part.result);
    }
}
```
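For this particular tool, each `tool-result` part carries the `todos` array returned by `execute`, so the collected parts can be flattened into a single list. A small sketch (the `Todo` type mirrors the zod schema from the original post; `collectTodos` is a hypothetical helper name):

```typescript
type Todo = { todo: string; assignee: string };

// Each tool-result part from proposeTodoCreation is a Todo[]
// (the value returned by `execute`); merge all steps into one list.
function collectTodos(toolResults: Todo[][]): Todo[] {
  return toolResults.flat();
}

// Example: two tool calls across maxSteps produce one merged list.
const merged = collectTodos([
  [{ todo: "Draft agenda", assignee: "Amy" }],
  [{ todo: "Send notes", assignee: "Ziad" }],
]);
console.log(merged.length); // 2
```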

## Option 2: Modify your system prompt

Add instructions to your system prompt to prevent additional text after tool usage:

```
const system = `Your existing system prompt...

IMPORTANT: When using tools, only call the tool and do not provide any additional explanation or text after the tool execution.`;
```

## Option 3: Use the `onFinish` callback

You can use the `onFinish` callback to handle only the tool results:

```
const result = streamText({
    model: getMistralModel(selectedLlm),
    maxSteps: 5,
    system,
    messages: messages.slice(-6),
    tools: {
        proposeTodoCreation,
    },
    onFinish: ({ toolResults }) => {
        // Handle only the tool results here
        console.log(toolResults);
    },
});
```

The system prompt modification (Option 2) combined with client-side filtering (Option 1) usually gives the best results.
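The filtering half of that combination can be sketched against a stand-in stream. This is a minimal illustration, not the real AI SDK types: `fakeStream` simulates `result.fullStream` emitting a tool result followed by trailing text, and `extractTodos` applies Option 1's filter:

```typescript
type Todo = { todo: string; assignee: string };

type StreamPart =
  | { type: "text-delta"; textDelta: string }
  | { type: "tool-result"; result: Todo[] };

// Stand-in for result.fullStream: a tool result, then the unwanted
// trailing text the model adds after the tool call.
async function* fakeStream(): AsyncGenerator<StreamPart> {
  yield { type: "tool-result", result: [{ todo: "Send notes", assignee: "Amy" }] };
  yield { type: "text-delta", textDelta: "Here are the tasks I created..." };
}

// Keep only tool results; text parts are dropped entirely.
async function extractTodos(stream: AsyncGenerator<StreamPart>): Promise<Todo[]> {
  const todos: Todo[] = [];
  for await (const part of stream) {
    if (part.type === "tool-result") todos.push(...part.result);
  }
  return todos;
}
```

Even with the system prompt in place, keeping this filter as a safety net means any stray text the model still emits never reaches your UI.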


Ziadoonalobaidi (@ziadoonalobaidi) · 2025-06-23

I will check these solutions. Thank you, Amy!