
[AI SDK](/c/ai-sdk/62)

# How do I manually save and load chat context properly with reasoning traces in Vercel AI SDK



Wyatt Marcus Bautista (@buffer0verfl0w1030-3) · 2026-04-02 · ♥ 1

I’m adding AI to my application with persistent chat history. For the basics, I manually construct the user prompt and assistant response, including its reasoning details, and store them in an array of `ModelMessage` so the context persists.

## Implementation Overview

Here is a basic snippet giving an overview of the flow:

```tsx
const context: ModelMessage[] = await loadContext(user_id);

// Construct a prompt
const constructedContent: Array<ImagePart | TextPart> = [];

// Append the user's text prompt
constructedContent.push({
    type: 'text',
    text: prompt,
});

const constructedPrompt: UserModelMessage = {
    role: 'user',
    content: constructedContent,
};

// Append the latest prompt to the context
context.push(constructedPrompt);

const outputs = await generateText({
    model: openrouter.chat("google/gemini-3-flash-preview", {
        reasoning: {
            enabled: true,
            effort: "medium",
        },
    }),
    messages: context,
    system: SYSTEM_PROMPT,
    temperature: 1,
});

// Append the assistant's response to the context
context.push(...outputs.response.messages);

// save the updated context
await saveContext(user_id, context);
```
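For context, `loadContext` and `saveContext` are not shown above (they live in `contextMemory.ts`). Here is a minimal sketch of what I mean by persistence, assuming JSON serialization into a key-value store; the `ModelMessage` type is a simplified structural stand-in for the one exported by `ai`:

```typescript
// Hypothetical persistence helpers (NOT the actual contextMemory.ts).
// ModelMessage is simplified here as a structural stand-in for the 'ai' type.
type ModelMessage = {
    role: 'system' | 'user' | 'assistant' | 'tool';
    content: unknown;
    providerOptions?: Record<string, unknown>;
};

// Stand-in for a real database or file store.
const store = new Map<string, string>();

async function saveContext(userId: string, context: ModelMessage[]): Promise<void> {
    // JSON round-trips plain message objects, including providerOptions,
    // but drops anything non-serializable (functions, class instances).
    store.set(userId, JSON.stringify(context));
}

async function loadContext(userId: string): Promise<ModelMessage[]> {
    const raw = store.get(userId);
    return raw ? (JSON.parse(raw) as ModelMessage[]) : [];
}
```

The point is that the JSON round-trip preserves the `providerOptions` fields on the saved assistant messages, which is what comes back into play when reloading the history below.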

For reference:

**generateContent.ts**: [view original on GitHub](https://github.com/zavocc/jakey-typescript/blob/pre-alpha/src/lib/llm/chat/generateContent.ts)
**contextMemory.ts**: [view original on GitHub](https://github.com/zavocc/jakey-typescript/blob/pre-alpha/src/lib/llm/chat/contextMemory.ts)

For models like `GPT-4o` and Gemini, the official docs recommend persisting the reasoning details, so rather than manually constructing the assistant message I simply pushed the response message objects onto the context array.

Saving works, as shown here:

![Image](upload://iUAwHNXOQRganTErbUKaOEwpIX6.png)

![Image](upload://zqs2K3Vj9URMwPD6QLi83uyri5c.png)

## Loading Chat History

Now when loading the chat history, I get this error:

![Image](upload://skZUfiAD3JyIs0DBBoP8f27bYDp.png)

It took a while to figure out, but it is an [Invalid prompt error](https://ai-sdk.dev/docs/reference/ai-sdk-errors/ai-invalid-prompt-error). Using `convertToModelMessages()` doesn’t help, since that function is intended for `UIMessage` objects, not `ModelMessage` objects.

Trying to do so resulted in this:

![Image](upload://e0KSABqQpX5RCzbM7rh4Zv0PbyE.png)

## Workaround

The workaround is to strip the `providerOptions` field from the saved messages. That works, but it also means trimming the reasoning traces along with it.
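A sketch of that workaround (a hypothetical helper; the types are simplified structural stand-ins for those exported by `ai`):

```typescript
// Workaround sketch: strip providerOptions (and reasoning parts) from
// messages before persisting. Hypothetical helper, not an 'ai' SDK API.
type MessagePart = { type: string; providerOptions?: Record<string, unknown>; [k: string]: unknown };
type StoredMessage = {
    role: string;
    content: string | MessagePart[];
    providerOptions?: Record<string, unknown>;
};

function stripProviderOptions(messages: StoredMessage[]): StoredMessage[] {
    return messages.map(({ providerOptions: _dropped, ...rest }) => {
        if (Array.isArray(rest.content)) {
            return {
                ...rest,
                content: rest.content
                    .filter((part) => part.type !== 'reasoning') // reasoning traces are lost here
                    .map(({ providerOptions: _po, ...part }) => part),
            };
        }
        return rest;
    });
}
```

This keeps the reloaded context valid at the cost of discarding the reasoning traces, which is exactly the trade-off I want to avoid.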

It would be nice to have practical guidance on how to overcome this; all I want is working context persistence that includes the reasoning traces. There doesn’t seem to be proper documentation on how to do this.

## Environment

* `@ai-sdk/openai`: `^3.0.48`
* `@openrouter/ai-sdk-provider`: `^2.3.3`
* `ai`: `^6.0.142`