Claude models in the AI Gateway experience stream interruption (truncation) issues

IDE: Cline

Example: “Help me write a 500-line bubble sort algorithm in Python”

The same request works fine with OpenRouter, but the output gets truncated when using the Vercel AI Gateway with Claude models.

It seems to be a “Max Output” issue. Could you increase the default max_tokens for the Claude models to 64k, like OpenRouter does?
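As a client-side workaround while the default is unchanged, the output-token cap can be raised explicitly per request. The sketch below assumes the Vercel AI SDK v5 (`streamText` with `maxOutputTokens`) and that a `'provider/model'` string id is routed through the AI Gateway; the model id and the 64k limit are illustrative, not confirmed values.

```ts
// Minimal sketch: explicitly raise the output-token cap when calling a
// Claude model through the AI Gateway via the Vercel AI SDK.
import { streamText } from 'ai';

const result = streamText({
  // Assumption: string model ids are routed through the AI Gateway
  // when no explicit provider is configured (AI SDK 5 default).
  model: 'anthropic/claude-sonnet-4',
  prompt: 'Help me write a 500-line bubble sort algorithm in Python',
  // Raise the cap so long completions are not cut off mid-stream;
  // 64_000 mirrors the default the report attributes to OpenRouter.
  maxOutputTokens: 64_000,
});

// Stream the completion to stdout as it arrives.
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```

This only helps when the caller controls the request parameters; IDE integrations like Cline that rely on the gateway's default would still benefit from a higher default cap.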