[▲ Vercel Community](/) · [Discussions](/c/community/4)

# How to hide experimental_attachments from the LLM?

174 views · 0 likes · 1 post


Jessyan0913 (@jessyan0913) · 2025-04-16

**Title:** How to pass document resources via `experimental_attachments` without exposing them to the LLM?

Hi everyone,

I'm building a document-based chat experience using the Vercel AI SDK, and I've hit a limitation that I'm hoping someone here can help with.

### Context:

The LLM I’m using **does not support multimodal input** (e.g., file uploads or attachments), but the product requirement is to support document-grounded conversations.

Here’s the architecture I’ve implemented so far:

1. When a user uploads a document, I store it in a custom RAG (Retrieval-Augmented Generation) service and generate a `resourceId`.
2. I register a tool with the LLM that queries the RAG service using the resource ID.
3. During the conversation, I include the resource ID in the user message so the tool can access relevant documents and respond accordingly.
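To show what I mean by step 3, here's a simplified sketch of the current flow. `queryRag`, `buildUserMessage`, and the message shape are placeholders for my actual setup, not SDK APIs:

```typescript
// Sketch of the current flow: the resourceId is embedded in the
// visible user message so the RAG tool can pick it up later.
type ChatMessage = { role: "user" | "assistant"; content: string };

// Hypothetical stand-in for my actual RAG service call.
async function queryRag(resourceId: string, query: string): Promise<string> {
  return `chunks for ${resourceId} matching "${query}"`;
}

// Problem: the resourceId leaks into the user-facing message content.
function buildUserMessage(text: string, resourceId: string): ChatMessage {
  return { role: "user", content: `${text}\n\n[resourceId: ${resourceId}]` };
}

const msg = buildUserMessage("Summarize the report", "doc_123");
// msg.content now contains "doc_123", which the user can see in the chat.
```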

### Problem:

I want to **improve the UX by hiding the resource ID from the user-facing chat**. Ideally, I’d like to attach the document info using `experimental_attachments` rather than inserting it into the visible message content.

However, the Vercel AI SDK appears to **automatically forward everything inside `experimental_attachments` to the model**. Since my LLM doesn't support attachments or multimodal input, the request errors out.

*(screenshot: the error returned when the model receives the attachment)*


### What I’m trying to achieve:

* Use `experimental_attachments` to pass metadata like `resourceId` to the tool,
* Prevent the LLM from seeing or processing these attachments,
* Still allow the tool to access the attachment metadata for context.
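To make those goals concrete, this is the kind of server-side split I'd like to do before messages reach the model. All the names here are my own (the attachment shape loosely mirrors what `useChat` sends, but `splitAttachments` is hypothetical, not an SDK function):

```typescript
// Attachment shape roughly as sent by the client with experimental_attachments.
type Attachment = { name: string; contentType: string; url: string };

type UIMessage = {
  role: "user" | "assistant";
  content: string;
  experimental_attachments?: Attachment[];
};

// Strip attachments before forwarding messages to the model, but keep the
// resourceIds (encoded in the attachment name here) for the tool layer.
function splitAttachments(messages: UIMessage[]): {
  modelMessages: { role: string; content: string }[];
  resourceIds: string[];
} {
  const resourceIds: string[] = [];
  const modelMessages = messages.map((m) => {
    for (const a of m.experimental_attachments ?? []) {
      resourceIds.push(a.name);
    }
    // Attachments are deliberately dropped from what the model sees.
    return { role: m.role, content: m.content };
  });
  return { modelMessages, resourceIds };
}
```

The tool layer would then receive `resourceIds` out of band, while the model only ever sees plain-text messages.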

### Question:

Is there a way to intercept or modify how `experimental_attachments` is handled in the Vercel AI SDK, so that it **is passed to tools but not sent to the LLM**?

Or is there an alternative recommended approach for passing hidden context (like a document ID) to the toolchain without exposing it to the model or user message?
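For example, the shape I'm hoping for is something like the following: the route handler extracts the hidden context from the request, and the tool closes over it, so it never appears in the prompt. Every name below is hypothetical, not an SDK API:

```typescript
// Hypothetical stand-in for my RAG service call.
async function queryRag(resourceId: string, query: string): Promise<string> {
  return `chunks for ${resourceId} matching "${query}"`;
}

// The tool closes over the resourceId extracted server-side, so the
// model only sees the tool's name/description, never the resourceId.
function makeSearchDocsTool(resourceId: string) {
  return {
    description: "Search the user's uploaded document",
    execute: async ({ query }: { query: string }) =>
      queryRag(resourceId, query),
  };
}

// In the route handler: resourceId comes from somewhere outside the
// message content (request body, attachment metadata, session, etc.).
const searchDocs = makeSearchDocsTool("doc_123");
```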

Any help or pointers would be greatly appreciated!

Thanks in advance!