How Vercel AI Gateway handles Claude thinking blocks and signatures with tool calls

Current vs. Expected Behavior

I’m using AI Gateway’s OpenAI-compatible REST API (direct fetch to https://ai-gateway.vercel.sh/v1) with reasoning models and tool calls. Through testing, I found that for DeepSeek’s thinking model, I need to use the reasoning field (not reasoning_content) when sending back assistant messages, and Gateway handles the conversion to DeepSeek’s native format. This works as expected.

However, I’m confused about Claude’s behavior. According to Anthropic’s documentation, when using extended thinking with tool calls, you must pass each thinking block back to the API complete and unmodified, including its signature field. The signature is critical for verification and for maintaining reasoning continuity.

When I call Claude (anthropic/claude-sonnet-4.6) through Gateway with reasoning enabled, the response includes both a reasoning string field and a reasoning_details array containing the full thinking block with signature:

```json
{
  "reasoning": "The user wants to know the weather...",
  "reasoning_details": [
    {
      "type": "reasoning.text",
      "text": "The user wants to know the weather...",
      "signature": "EvgBCkYICxgCKkA...",
      "format": "anthropic-claude-v1",
      "index": 0
    }
  ],
  "tool_calls": [...]
}
```

The surprising part is that, when sending back the assistant message with the tool results, all three approaches succeed: passing no reasoning field at all, passing only the reasoning string, or passing the text as reasoning_content. None of them causes an error, even though I’m never explicitly passing the signature back.
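Concretely, the three variants can be sketched as plain message objects (the helper names and the `AssistantMsg` type are mine; the field names follow the response shape shown above):

```typescript
// The three assistant-message variants tested when returning tool results.
// Helper names and the AssistantMsg shape are illustrative, not Gateway API.
type AssistantMsg = {
  role: "assistant";
  content: string | null;
  tool_calls: unknown[];
  reasoning?: string;
  reasoning_content?: string;
};

// Variant A: no reasoning field at all.
function variantA(toolCalls: unknown[]): AssistantMsg {
  return { role: "assistant", content: null, tool_calls: toolCalls };
}

// Variant B: only the flat reasoning string (no signature anywhere).
function variantB(toolCalls: unknown[], text: string): AssistantMsg {
  return { role: "assistant", content: null, tool_calls: toolCalls, reasoning: text };
}

// Variant C: the DeepSeek-style reasoning_content field.
function variantC(toolCalls: unknown[], text: string): AssistantMsg {
  return {
    role: "assistant",
    content: null,
    tool_calls: toolCalls,
    reasoning_content: text,
  };
}
```

All three, placed in the messages array ahead of the tool result, were accepted by the Gateway.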

Questions

My main questions are about Claude’s thinking block handling through Gateway:

  1. Does AI Gateway automatically preserve and reattach the signature internally when I only pass the reasoning string back? Or is the signature somehow not required when going through Gateway?
  2. Should I explicitly pass back reasoning_details (with signature) for better cache optimization and to ensure reasoning continuity, even though omitting it doesn’t cause errors?
  3. Is there any difference in model behavior or response quality between passing the full reasoning_details versus just the reasoning string?

Reproduction Steps

I’m making direct REST API calls without the AI SDK. The basic flow is:

  1. Send a user message with tools and reasoning enabled.
  2. Receive an assistant message with reasoning and tool_calls.
  3. Send back the assistant message along with the tool result.

For Claude, all variations of the assistant message (with or without reasoning fields) succeed, which is different from DeepSeek where the reasoning field is strictly required.

Project Information

  • Endpoint: Direct REST API calls to AI Gateway endpoint
  • Model: anthropic/claude-sonnet-4.6
  • Configuration: { enabled: true, max_tokens: 2000 }

Answer

Yes, Gateway automatically preserves and reattaches signatures internally when you pass back only the reasoning string. The docs confirm that Gateway preserves reasoning details from models across interactions and handles the normalization.

You don’t need to explicitly pass reasoning_details with signatures. Gateway handles this internally. This is why all three approaches (no reasoning, just reasoning string, or reasoning_content) work without errors.

There’s no difference in model behavior whether you pass full reasoning_details or just reasoning. Gateway’s normalization ensures the model receives what it needs regardless of what you send.
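That said, echoing the assistant turn back verbatim is the most portable habit, since it also works against endpoints that do enforce signatures. A minimal sketch (field names follow the response shape quoted in the question; the helper name is mine):

```typescript
// Re-send the assistant turn without stripping anything: the thinking
// block, signature included, travels back exactly as received.
function echoAssistantTurn(choiceMessage: Record<string, unknown>) {
  const { role, content, tool_calls, reasoning, reasoning_details } =
    choiceMessage;
  return { role, content, tool_calls, reasoning, reasoning_details };
}
```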

The Gateway abstracts away provider-specific requirements (like Claude’s signatures or DeepSeek’s reasoning field) to give you a consistent interface.

This lets you switch between models without rewriting your conversation-management logic, which is exactly what you’re experiencing.
