[AI SDK](/c/ai-sdk/62)

# Why chat.addToolOutput function not sending request to api endpoint

113 views · 0 likes · 2 posts

Zain Ul Din (@zain-ul-din) · 2025-12-29

Hi, I'm new to the Vercel AI SDK. I'm trying to do the following, and I'm not sure if there is a better way to do it.

**Problem:** I have an `askClarifyingQuestionTool` that is supposed to ask the user a question. In the UI, I render a textarea to get user input while the tool call is being made, and on submit I use `chat.addToolOutput` to complete the tool call, but I'm not seeing any call going to the server in the network tab afterward.

**Code:**

```
const chat = useChat({
  id: chatId,
  onToolCall: ({ toolCall }) => {
    return;
  },
  transport: new DefaultChatTransport({
    api: "/api/chat",
    body: {
      technique: selectedTechnique,
      techniqueConfig,
      modelSettings: {
        model: settings.defaultModel,
        temperature: settings.defaultTemperature,
        maxTokens: settings.maxTokens,
      },
    },
  }),
});

// ...

{chat.messages.length > 0 && (
  <div className="space-y-6">
    {chat.messages.map((message) => {
      return (
        <div key={message.id} className="flex flex-row gap-4 items-start">
          <div className="min-w-16">
            <span
              className={cn(
                "rounded-md px-2 py-1 text-xs font-medium",
                message.role !== "user"
                  ? "bg-yellow-400 dark:bg-yellow-600 text-yellow-900 dark:text-yellow-100"
                  : "bg-blue-400 dark:bg-blue-600 text-blue-900 dark:text-blue-100"
              )}
            >
              {message.role === "user" ?
                "User" : "AI"}
            </span>
          </div>
          <div className="flex-1 min-w-0 space-y-4">
            {message.parts.map((part, i) => {
              if (part.type === "text") {
                return (
                  <div
                    key={i}
                    className="prose prose-sm dark:prose-invert max-w-none"
                  >
                    <Markdown>{part.text}</Markdown>
                  </div>
                );
              }
              if (part.type === "tool-askClarifyingQuestion") {
                const callId = part.toolCallId;
                const inputPart = part.input as {
                  suggestions: string[];
                  question: string;
                };
                switch (part.state) {
                  case "input-streaming":
                    return (
                      <div key={callId}>Loading confirmation request...</div>
                    );
                  case "output-available": {
                    return (
                      <div key={callId}>
                        {JSON.stringify(part.output, null, 2)}
                      </div>
                    );
                  }
                  case "input-available": {
                    if (!inputPart) return null;
                    if (typeof inputPart === "string") {
                      return (
                        <div key={i}>
                          <Markdown>{inputPart}</Markdown>
                        </div>
                      );
                    }
                    return (
                      <div key={i}>
                        <Markdown>
                          {"**" + inputPart.question + "** \n" +
                            inputPart.suggestions
                              .map((ele) => `- ${ele}`)
                              .join("\n ")}
                        </Markdown>
                        <form
                          onSubmit={async (e) => {
                            console.log("being submitted");
                            e.preventDefault();
                            const formData = new FormData(
                              e.target as HTMLFormElement
                            );
                            const userInput = formData.get("userInput");
                            console.log(userInput);
                            if (userInput) {
                              console.log("userInput tool call: ", {
                                tool: part.type,
                                toolCallId: part.toolCallId,
                                output: userInput,
                              });
                              await chat.addToolOutput({
                                tool: part.type,
                                toolCallId: part.toolCallId,
                                output: userInput,
                              });
                              console.log("DONE userInput tool call: ", {
                                tool: part.type,
                                toolCallId: part.toolCallId,
                                output: userInput,
                              });
                            }
                          }}
                          className="p-2 border bg-secondary/50 rounded-md mt-2"
                        >
                          <Textarea
                            name="userInput"
                            placeholder="Your answer..."
                            className="mb-2 w-full"
                          />
                          <Button size={"sm"} type="submit">
                            Response
                          </Button>
                        </form>
                      </div>
                    );
                  }
                }
                return null;
              }
              return null;
            })}
          </div>
        </div>
      );
    })}
  </div>
)}
```

// route.ts

```
const result = streamText({
  model: openai(modelSettings?.model || "gpt-4o"),
  system: systemPrompt,
  temperature: modelSettings?.temperature || 0.7,
  ...(modelSettings?.maxTokens && { maxTokens: modelSettings.maxTokens }),
  messages: await convertToModelMessages(messages),
  tools: collaborativeTools, // Enable AI tool calling for collaborative prompt building
  stopWhen: stepCountIs(1),
});

export const askClarifyingQuestionTool = tool({
  description:
    "Ask the user a clarifying question to better understand their requirements for the prompt. Use this when the user's intent is unclear or when important configuration is missing.",
  inputSchema: z.object({
    question: z.string().describe("The specific question to ask the user"),
    context: z
      .string()
      .describe("Brief explanation of why this information is needed"),
    field: z
      .enum(["examples", "persona", "outputFormat", "other"])
      .describe("Which configuration field this question relates to"),
    suggestions: z
      .array(z.string())
      .optional()
      .describe("Optional suggestions to help the user answer"),
  }),
  // execute: async (args) => {
  //   console.log("here we go: ", args);
  //   // Return args for client-side rendering - user interaction required
  //   return {
  //     type: "question",
  //     ...args,
  //     requiresUserInput: true,
  //   };
  // },
});
```

system (@system) · 2026-01-20

Hi @zain-ul-din! I'm the Vercel Community Bot, and I'm here to help make sure your question gets answered quickly! To help our community team assist you better, could you please provide:

- **Error messages**
- **Config files**
- **Deployment/build logs**
- **Reproduction steps**
- **Environment details**

Having this information will help us identify and solve your issue much faster.
For more tips on getting great answers, check out [How to Get Good Answers](https://community.vercel.com/t/how-to-get-good-answers/158). Thanks!
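A hedged note on the mechanism the question touches: in the AI SDK 5 client-side tools pattern, adding a tool output updates local chat state, and the follow-up request to the API route is only sent automatically when a `sendAutomaticallyWhen` predicate configured on `useChat` passes (the SDK ships `lastAssistantMessageIsCompleteWithToolCalls` for this purpose). The standalone function below is an illustrative re-implementation of such a completeness check over a simplified message shape; it is an assumption drawn from that pattern, not the SDK's actual code or a confirmed diagnosis of the thread's issue.

```typescript
// Illustrative sketch (assumption, not SDK source): a follow-up request
// should fire only once the last assistant message's tool calls all have
// outputs. The Message/part shapes are simplified stand-ins for the SDK's
// UIMessage types.

type ToolPart = {
  type: `tool-${string}`;
  toolCallId: string;
  state: "input-streaming" | "input-available" | "output-available";
};
type TextPart = { type: "text"; text: string };
type Message = { role: "user" | "assistant"; parts: (ToolPart | TextPart)[] };

// True when the last message is from the assistant, contains at least one
// tool part, and every tool part already has an output.
function lastAssistantMessageIsComplete(messages: Message[]): boolean {
  const last = messages[messages.length - 1];
  if (!last || last.role !== "assistant") return false;
  const toolParts = last.parts.filter(
    (p): p is ToolPart => p.type.startsWith("tool-")
  );
  return (
    toolParts.length > 0 &&
    toolParts.every((p) => p.state === "output-available")
  );
}
```

If no such predicate is configured, or it never returns true (for example because a tool part is still in the `input-available` state), then recording the tool output alone would not produce a network request, which would match the symptom described in the question.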