[▲ Vercel Community](/)

[Help](/c/help/9)

# Gemini 3 Flash - Disabled thinking/reasoning

129 views · 4 likes · 4 posts


edgarhq (@grundmanise-5870) · 2026-03-01

## Problem

The `gemini-3-flash-preview` model supports a non-thinking mode via `thinking_level` (or the OpenAI-compatible `reasoning_effort`) when set to `minimal`. However, I’m unable to enable this mode when calling AI Gateway through the `https://ai-gateway.vercel.sh/v1/chat/completions` endpoint.

The [documentation](https://vercel.com/docs/ai-gateway/sdks-and-apis/openai-compat/advanced#provider-specific-behavior) says that `reasoning` options are mapped to the matching config options for the provider, but I can’t get the `minimal` setting to work.

## Current Behavior

None of the following combinations seem to work:

* `reasoning.enabled` set to `false`
* `reasoning.effort` set to `none` or `minimal`

Although reasoning output isn’t present in the response when `reasoning.enabled` is `false`, I consistently see `reasoning_tokens` in the usage data. The response duration also suggests that reasoning still ran, compared with the same request sent directly to the provider.

I’ve also tried using `providerOptions` to set thinking and reasoning levels, but to no avail.

## Request Example

```bash
curl -X POST "https://ai-gateway.vercel.sh/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_AI_GATEWAY_API_KEY" \
  -d '{
    "model": "google/gemini-3-flash",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": false,
    "reasoning": {"enabled": false, "effort": "minimal"}
  }'
```

I understand that `minimal` !== “no reasoning”, but when I send the same request directly to the provider, I can consistently confirm that no reasoning is executed.

Has anyone succeeded with something similar, or have a suggestion on how to achieve it?


Zachary (@zdge) · 2026-03-01 · ♥ 2

Try this:
```bash
curl -X POST "https://ai-gateway.vercel.sh/v1/chat/completions" \
    -H "Authorization: Bearer YOUR_AI_GATEWAY_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{
      "model": "google/gemini-3-flash",
      "messages": [
        { "role": "user", "content": "Hello" }
      ],
      "stream": false,
      "providerOptions": {
        "google": { "thinkingConfig": { "thinkingLevel": "minimal" } }
      }
    }'
```


edgarhq (@grundmanise-5870) · 2026-03-01 · ♥ 1

Thanks, that works! The issue was that I was providing both `providerOptions` and `reasoning` configs at the same time, and it seems like `reasoning` takes precedence. It would be great to update the [documentation](https://vercel.com/docs/ai-gateway/sdks-and-apis/openai-compat/advanced#provider-specific-behavior) to reflect this, or if the problem is with the mapping, to fix it in the gateway. Either way, thanks a lot!
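For anyone hitting the same thing from TypeScript, the takeaway can be sketched as a tiny request-body builder (the `buildBody` helper and its types are mine for illustration, not a gateway or AI SDK API): send either the OpenAI-compatible `reasoning` field or the Google-native `providerOptions`, never both, since `reasoning` appears to take precedence when both are present.

```typescript
// Sketch: build a chat-completions body for the gateway, ensuring the
// OpenAI-compatible `reasoning` field and the Google-native
// `providerOptions` are never sent together.
type ChatBody = {
  model: string;
  messages: { role: string; content: string }[];
  stream: boolean;
  reasoning?: { enabled: boolean; effort: string };
  providerOptions?: Record<string, unknown>;
};

function buildBody(useProviderOptions: boolean): ChatBody {
  const body: ChatBody = {
    model: "google/gemini-3-flash",
    messages: [{ role: "user", content: "Hello" }],
    stream: false,
  };
  if (useProviderOptions) {
    // Google-native thinking config; do NOT also set `reasoning`,
    // or it will override this.
    body.providerOptions = {
      google: { thinkingConfig: { thinkingLevel: "minimal" } },
    };
  } else {
    // OpenAI-compatible shape, as in the original (non-working) request.
    body.reasoning = { enabled: false, effort: "minimal" };
  }
  return body;
}
```

The JSON-serialized result of `buildBody(true)` matches the working curl body above.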


Zachary (@zdge) · 2026-03-01 · ♥ 1

Glad to hear it works for you, and thanks for the feedback! I'll relay this to the AI Gateway team so we can make improvements to our documentation.