I’m trying to integrate MCP (Model Context Protocol) tools with AI SDK UI. The tools are detected and the model triggers them, but they never actually execute: the stream emits tool-input-available events and then stops without running the tools or returning results.
Frontend Code
```ts
async function handleSubmit(message: PromptInputMessage) {
  const chat = new Chat({
    transport: new DefaultChatTransport({
      api: `${this.host}/api/chat`,
    }),
    sendAutomaticallyWhen: lastAssistantMessageIsCompleteWithToolCalls,
  });

  const hasText = Boolean(message.text?.trim());
  const hasAttachments = Boolean(message.files?.length);
  if (!hasText && !hasAttachments) return;

  try {
    const tools = await McpUtils.getTools();
    chat.sendMessage(
      {
        text: hasText ? message.text : "Sent with attachments",
      },
      {
        body: {
          model: selectedModel.value?.id,
          mcpEnabled: hasEnabledServers.value,
          tools,
        },
      }
    );
    message.text = "";
  } catch (error) {
    console.error("Failed to send message", error);
  }
}
```
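For reference, the value that `McpUtils.getTools()` resolves to looks roughly like this (a simplified sketch: the real definitions come from the connected MCP servers, and the example tool below is illustrative):

```typescript
// Hypothetical shape of what McpUtils.getTools() resolves to; the field
// names mirror what the backend reads (description, parameters).
type McpToolDef = {
  description: string;
  parameters: Record<string, unknown>; // raw JSON Schema from the MCP server
};

const exampleTools: Record<string, McpToolDef> = {
  "get-current-mod-list": {
    description: "Returns the mods currently installed in the manager",
    parameters: { type: "object", properties: {}, required: [] },
  },
};
```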
Backend Code (Nuxt 4)
```ts
// http://localhost:3000/api/chat
export default defineEventHandler(async (event) => {
  const {
    messages,
    model,
    tools: clientTools,
  }: {
    messages: UIMessage[];
    model: string;
    tools?: Record<string, any>;
  } = await readBody(event);

  const tools: ToolSet = {};
  if (clientTools) {
    for (const [name, toolDef] of Object.entries(clientTools)) {
      tools[name] = tool({
        description: toolDef.description,
        parameters: jsonSchema(toolDef.parameters),
      } as any);
    }
  }

  const result = streamText({
    model: model || "xai/grok-code-fast-1",
    messages: convertToModelMessages(messages),
    tools,
    stopWhen: stepCountIs(10),
  });

  return result.toUIMessageStreamResponse();
});
```
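For context, this is roughly the dispatch pattern I imagined needing on the server to forward tool calls back to the MCP client (self-contained sketch: the AI SDK's `tool()` wrapper is omitted, and `mcpClient.callTool` is a hypothetical stand-in for the real MCP client with a mocked return value):

```typescript
// Sketch: turn raw MCP tool definitions into handlers that forward the
// model's tool calls to the MCP client.
type ToolHandler = (input: unknown) => Promise<unknown>;

const mcpClient = {
  // Mocked result; a real client would send a tools/call request over MCP.
  async callTool(name: string, input: unknown): Promise<unknown> {
    return { name, input, mods: ["example-mod"] };
  },
};

function buildHandlers(
  defs: Record<string, { description: string }>
): Record<string, ToolHandler> {
  const handlers: Record<string, ToolHandler> = {};
  for (const name of Object.keys(defs)) {
    // Each handler forwards the model-supplied input to the MCP client.
    handlers[name] = (input) => mcpClient.callTool(name, input);
  }
  return handlers;
}

// Usage: dispatch one tool call through the handler map.
const handlers = buildHandlers({
  "get-current-mod-list": { description: "List installed mods" },
});
handlers["get-current-mod-list"]({}).then((result) => {
  console.log(result);
});
```

Whether something like this is supposed to be wired into the `tool()` definitions on the backend, or handled on the frontend, is exactly what I'm unsure about (see Questions below).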
Current Behavior
The stream response shows the model recognizes the tool and prepares to call it:
```
data: {"type":"tool-input-start","toolCallId":"call_21699637","toolName":"get-current-mod-list"}
data: {"type":"tool-input-delta","toolCallId":"call_21699637","inputTextDelta":"{}"}
data: {"type":"tool-input-available","toolCallId":"call_21699637","toolName":"get-current-mod-list","input":{}}
data: {"type":"finish-step"}
data: {"type":"finish","finishReason":"tool-calls"}
data: [DONE]
```
However:
- The tool is never actually executed
- No tool result is returned
- The UI shows “Running…” indefinitely
- No follow-up response from the model
Expected Behavior
The tool should:
- Be executed with the provided input
- Return results to the model
- Allow the model to continue and provide a final response to the user
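Concretely, after tool-input-available I'd expect the stream to continue with something like the following (payload values are illustrative):

```
data: {"type":"tool-output-available","toolCallId":"call_21699637","output":{...}}
data: {"type":"start-step"}
... (text deltas with the model's final answer) ...
data: {"type":"finish"}
data: [DONE]
```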
Questions
- Tool Definition Format: Is there a specific format required for passing MCP tool definitions through useChat's body options? Should I be using a different structure than raw MCP tool schemas?
- Tool Execution: Does AI SDK UI expect tool execution to happen on the frontend or the backend? Do I need to implement tool execution handlers separately?
- Tool Results: How should tool results be sent back to continue the conversation? Should this happen automatically, or do I need to manually append tool result messages?
- Missing Configuration: Are there additional configuration options needed for streamText or useChat to properly handle tool execution in this MCP context?
Environment
- AI SDK Version: v6
- Framework: Nuxt 4 + Vue 3
- Model: xai/grok-code-fast-1
- MCP Integration: Custom MCP client
Additional Context
The MCP tools are retrieved successfully from McpUtils.getTools() and contain valid tool definitions with descriptions and JSON Schemas. The model clearly understands the tools are available (as shown in the reasoning deltas), but the execution pipeline seems incomplete.
Any guidance on the proper pattern for integrating MCP tools with AI SDK UI would be greatly appreciated!
My project GitHub: GitHub - GlossMod/GlossCopilot
Backend Code: GitHub - aoe-top/copilot.aoe.top
The MCP used for testing is from another project of mine: Gloss-Mod-Manager/src/model/MCPServer.ts at main · GlossMod/Gloss-Mod-Manager · GitHub