I’m trying to add AI to my application with persistent chat history. As a starting point, I manually construct the user prompt and the assistant response, including its reasoning details, and store them in an array of `ModelMessage` so the context persists.
Implementation Overview
Here is the code snippet for an overview, which is quite basic:
```ts
import { generateText } from 'ai';
import type { ImagePart, ModelMessage, TextPart, UserModelMessage } from 'ai';
import { createOpenRouter } from '@openrouter/ai-sdk-provider';

const openrouter = createOpenRouter({ apiKey: process.env.OPENROUTER_API_KEY });

const context: ModelMessage[] = await loadContext(user_id);

// Construct the user prompt
const constructedContent: Array<ImagePart | TextPart> = [];

// Append the user's text prompt
constructedContent.push({
  type: 'text',
  text: prompt,
});

const constructedPrompt: UserModelMessage = {
  role: 'user',
  content: constructedContent,
};

// Append the latest prompt to the context
context.push(constructedPrompt);

const outputs = await generateText({
  model: openrouter.chat('google/gemini-3-flash-preview', {
    reasoning: {
      enabled: true,
      effort: 'medium',
    },
  }),
  messages: context,
  system: SYSTEM_PROMPT,
  temperature: 1,
});

// Append the assistant's response (including reasoning parts) to the context
context.push(...outputs.response.messages);

// Save the updated context
await saveContext(user_id, context);
```
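Since `contextMemory.ts` is truncated below, here is a minimal sketch of what `loadContext` / `saveContext` could look like — JSON-on-disk, one file per user. The storage backend, the directory name, and the local `StoredMessage` stub are all assumptions for illustration, not the actual implementation:

```typescript
import { existsSync, mkdirSync, readFileSync, writeFileSync } from 'node:fs';
import { join } from 'node:path';

// Local stub of the persisted shape; the real type is ModelMessage from `ai`.
type StoredMessage = { role: string; content: unknown; providerOptions?: unknown };

const CONTEXT_DIR = './context-store'; // hypothetical location

function contextPath(userId: string): string {
  return join(CONTEXT_DIR, `${encodeURIComponent(userId)}.json`);
}

export async function loadContext(userId: string): Promise<StoredMessage[]> {
  const path = contextPath(userId);
  if (!existsSync(path)) return []; // first conversation: empty history
  return JSON.parse(readFileSync(path, 'utf8'));
}

export async function saveContext(userId: string, context: StoredMessage[]): Promise<void> {
  mkdirSync(CONTEXT_DIR, { recursive: true });
  // JSON.stringify round-trips reasoning parts and providerOptions verbatim,
  // which is exactly why the loading problem below shows up later.
  writeFileSync(contextPath(userId), JSON.stringify(context, null, 2));
}
```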
For reference (both files truncated in the original post):
- generateContent.ts
- contextMemory.ts
For models like GPT-4o and Gemini, the official docs recommend storing the reasoning details along with the rest of the message. Rather than manually constructing the assistant response, I took the lazy route and pushed the response messages straight onto the context array.
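For context, the assistant entries pushed from `outputs.response.messages` end up with multi-part content; with reasoning enabled, a persisted assistant message looks roughly like this. This is a hand-written illustration (field values and the exact `providerOptions` payload are invented, not captured output):

```typescript
// Illustrative shape of one persisted assistant message (values are invented).
const assistantMessage = {
  role: 'assistant',
  content: [
    {
      type: 'reasoning',
      text: 'The user asked about X, so I should...',
      // Provider-specific metadata rides along on the part — this is the
      // field that later causes trouble when the context is loaded back.
      providerOptions: { openrouter: { reasoning: {} } },
    },
    { type: 'text', text: 'Here is the answer.' },
  ],
};
```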
Saving works as expected:
Loading Chat History
Now when loading the chat history, I get this error:
It took a while to figure out, but it is an `Invalid prompt` error. Using `convertToModelMessages()` doesn’t help either, since that function is intended for converting `UIMessage` objects, not for re-validating `ModelMessage`s; trying it just produces a different error:
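The underlying mismatch: `convertToModelMessages()` expects the `UIMessage` shape (an `id` plus a `parts` array), while what was persisted is already in `ModelMessage` shape (a `content` array). Roughly, with local type stubs for illustration (the real types live in the `ai` package):

```typescript
// Simplified stubs of the two shapes (the real types are in the `ai` package).
type UIMessageStub = {
  id: string;
  role: 'user' | 'assistant' | 'system';
  parts: Array<{ type: string; text?: string }>; // UI messages use `parts`
};

type ModelMessageStub = {
  role: 'user' | 'assistant' | 'system' | 'tool';
  content: string | Array<{ type: string; text?: string }>; // model messages use `content`
};

// What loadContext() returns is already the second shape, so feeding it to
// convertToModelMessages() (which expects the first) cannot work.
const persisted: ModelMessageStub = {
  role: 'assistant',
  content: [{ type: 'text', text: 'Hello!' }],
};

const looksLikeUIMessage = 'parts' in persisted; // false
```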
Workaround
The workaround is to remove the `providerOptions` field before saving, which works, but it also means trimming the reasoning traces along with it.
It would be nice to have practical guidance on how to overcome this. All I want is for context persistence to work, reasoning traces included; there doesn’t seem to be proper documentation on how to do that.
Environment
- `@ai-sdk/openai`: `^3.0.48`
- `@openrouter/ai-sdk-provider`: `^2.3.3`
- `ai`: `^6.0.142`