Title: How to pass document resources via `experimental_attachments` without exposing them to the LLM?
Hi everyone,
I’m building a document-based chat experience using the Vercel AI SDK, and I’ve hit a limitation that I’m hoping someone here can help with.
Context:
The LLM I’m using does not support multimodal input (e.g., file uploads or attachments), but the product requirement is to support document-grounded conversations.
Here’s the architecture I’ve implemented so far:
- When a user uploads a document, I store it in a custom RAG (Retrieval-Augmented Generation) service and generate a `resourceId`.
- I register a tool with the LLM that queries the RAG service using the resource ID (see the sketch below).
- During the conversation, I include the resource ID in the user message so the tool can access relevant documents and respond accordingly.
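For concreteness, here’s roughly what my tool registration looks like. This is a simplified sketch: `ragService` and its `query` method stand in for my own RAG backend and are not part of the AI SDK.

```ts
import { tool } from 'ai';
import { z } from 'zod';

// Stand-in for my RAG backend; not part of the AI SDK.
declare const ragService: {
  query(resourceId: string, query: string): Promise<string[]>;
};

const searchDocument = tool({
  description: 'Search an uploaded document for passages relevant to a query',
  parameters: z.object({
    resourceId: z.string().describe('ID of the uploaded document resource'),
    query: z.string().describe('What to look for in the document'),
  }),
  // The model currently has to supply resourceId itself, which is why
  // the ID has to appear somewhere in the visible conversation.
  execute: async ({ resourceId, query }) => ({
    passages: await ragService.query(resourceId, query),
  }),
});
```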
Problem:
I want to improve the UX by hiding the resource ID from the user-facing chat. Ideally, I’d like to attach the document info via `experimental_attachments` rather than inserting it into the visible message content. However, the Vercel AI SDK appears to automatically send everything inside `experimental_attachments` to the model, and since my LLM doesn’t support attachments or multimodal input, the request errors out.
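For reference, this is roughly how I’m attaching the metadata on the client (AI SDK React bindings). Encoding `resourceId` as a JSON data URL is my own convention, not something the SDK prescribes:

```tsx
'use client';
import { useChat } from 'ai/react';

export function Chat({ resourceId }: { resourceId: string }) {
  const { input, handleInputChange, handleSubmit } = useChat();

  return (
    <form
      onSubmit={(event) =>
        handleSubmit(event, {
          // The SDK forwards this attachment to the model as message
          // content, which is exactly what I need to prevent.
          experimental_attachments: [
            {
              name: 'rag-resource',
              contentType: 'application/json',
              url: `data:application/json;base64,${btoa(
                JSON.stringify({ resourceId }),
              )}`,
            },
          ],
        })
      }
    >
      <input value={input} onChange={handleInputChange} />
    </form>
  );
}
```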
What I’m trying to achieve:
- Use `experimental_attachments` to pass metadata like `resourceId` to the tool,
- prevent the LLM from seeing or processing these attachments, and
- still allow the tool to access the attachment metadata for context.
Question:
Is there a way to intercept or modify how `experimental_attachments` is handled in the Vercel AI SDK, so that attachments are passed to tools but not sent to the LLM? Or is there an alternative recommended approach for passing hidden context (like a document ID) to the toolchain without exposing it to the model or to the visible user message?
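To make the question concrete, this is the kind of server-side interception I’m imagining, though I don’t know whether it’s the intended pattern: read the `resourceId` out of the attachments, strip them from the messages before they reach the model, and close the tool over the ID. The model name is just a placeholder, `ragService` is my own backend, and the parsing assumes the data-URL convention from my client snippet above.

```ts
import { streamText, convertToCoreMessages, tool, type Message } from 'ai';
import { openai } from '@ai-sdk/openai'; // placeholder provider/model
import { z } from 'zod';

// Stand-in for my RAG backend; not part of the AI SDK.
declare const ragService: {
  query(resourceId: string, query: string): Promise<string[]>;
};

// Tool factory: resourceId is captured in a closure, so the model
// never has to see or emit it.
function makeSearchDocumentTool(resourceId?: string) {
  return tool({
    description: 'Search the uploaded document for relevant passages',
    parameters: z.object({
      query: z.string().describe('What to look for in the document'),
    }),
    execute: async ({ query }) =>
      resourceId
        ? { passages: await ragService.query(resourceId, query) }
        : { passages: [] },
  });
}

export async function POST(req: Request) {
  const { messages }: { messages: Message[] } = await req.json();

  // Recover the resourceId from my custom attachment...
  let resourceId: string | undefined;
  for (const message of messages) {
    for (const attachment of message.experimental_attachments ?? []) {
      if (attachment.name === 'rag-resource') {
        resourceId = JSON.parse(atob(attachment.url.split(',')[1])).resourceId;
      }
    }
  }

  // ...and drop all attachments so nothing multimodal reaches the model.
  const sanitized = messages.map(
    ({ experimental_attachments: _dropped, ...rest }) => rest,
  );

  const result = await streamText({
    model: openai('gpt-4o-mini'), // placeholder; my actual model is text-only
    messages: convertToCoreMessages(sanitized),
    tools: { searchDocument: makeSearchDocumentTool(resourceId) },
  });

  return result.toDataStreamResponse();
}
```

Closing over `resourceId` like this would also let me drop the ID from the visible user message entirely, which is the UX improvement I’m after. But stripping `experimental_attachments` by hand feels like working against the SDK, hence the question.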
Any help or pointers would be greatly appreciated!
Thanks in advance!