I’m encountering an issue when using the @ai-sdk/google integration in the Vercel AI SDK with the gemini-2.0-flash-001 model. I’m trying to use the streamText function together with tools and the experimental_output option.

Code:
import { createGoogleGenerativeAI } from "@ai-sdk/google";
import { Output, smoothStream, streamText, tool } from "ai";
import { z } from "zod";

// Read the API key from the environment.
const model = createGoogleGenerativeAI({
  apiKey: process.env.GEMINI_API_KEY,
});

const Schema = z.object({
  response: z.string(),
});

const getWeatherInformation = tool({
  description: "show the weather in a given city to the user",
  parameters: z.object({ city: z.string() }),
  execute: async ({ city }) => {
    console.log(`Getting weather information for ${city}`);
    return `The weather in ${city} is sunny`;
  },
});

export const tools = {
  getWeatherInformation,
};

const result = streamText({
  model: model("gemini-2.0-flash-001"),
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Give me the weather for Gujarat" },
  ],
  tools,
  maxSteps: 2,
  onError: (error) => {
    console.error("Error while streaming:", JSON.stringify(error, null, 2));
  },
  onFinish: (result) => {
    console.info("Output: " + JSON.stringify(result.object, null, 2));
    console.info("Tool calling: " + JSON.stringify(result.toolCalls, null, 2));
  },
  experimental_transform: smoothStream(),
  experimental_output: Output.object({
    schema: Schema,
  }),
});

for await (const delta of result.textStream) {
  console.info("------------------->", delta);
  console.info("\n\n");
}
However, I’m consistently getting the following error:
{
  "error": {
    "code": 400,
    "message": "For controlled generation of only function calls (forced function calling), please set 'tool_config.function_calling_config.mode' field to ANY instead of populating 'response_mime_type' and 'response_schema' fields. For more details, see: https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling#tool-config",
    "status": "INVALID_ARGUMENT"
  }
}
Interestingly, when I use the OpenAI integration with a similar setup (streaming text with tools), it works without any issues.
Could you please help me understand what is causing this error with the Google Gemini model and how I can resolve it? It seems there may be a specific configuration requirement for Gemini when combining tools with experimental_output in streamText that I’m missing.
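For context, the error message suggests Gemini refuses to accept a response schema (which experimental_output sends as response_mime_type/response_schema) in the same request as forced tool calling. One workaround I’ve been considering is splitting the work into two calls so the two features are never sent together — this is only a sketch, and using generateObject for the second step is my own assumption, not something I’ve confirmed is the recommended pattern:

```typescript
import { createGoogleGenerativeAI } from "@ai-sdk/google";
import { generateObject, streamText } from "ai";
import { z } from "zod";

const google = createGoogleGenerativeAI({ apiKey: process.env.GEMINI_API_KEY });
const Schema = z.object({ response: z.string() });

// Step 1: tool calling only — no experimental_output, so the request
// carries no response_schema and Gemini accepts it.
const step1 = streamText({
  model: google("gemini-2.0-flash-001"),
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Give me the weather for Gujarat" },
  ],
  tools, // same tools object as in my snippet above
  maxSteps: 2,
});
const answerText = await step1.text;

// Step 2: structured output only — no tools, so sending a schema is allowed.
const step2 = await generateObject({
  model: google("gemini-2.0-flash-001"),
  schema: Schema,
  prompt: `Restate this answer as JSON: ${answerText}`,
});
console.log(step2.object);
```

This doubles the number of requests, though, so I’d much prefer a way to make tools and experimental_output coexist in a single streamText call if one exists.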
Any guidance or suggestions would be greatly appreciated.
Thanks!