401 Unauthorized from v0.dev API when using Codex CLI (--provider vercel)

Account / Plan

  • v0.dev account email: loai1@…
  • Plan: <Premium / Team> (to be confirmed against what Billing shows)
  • API key last 4 chars: ...eT3 (begins v1:)

  • Config path: /home/<user>/.codex/config.json (see contents below)

~/.codex/config.json

{
  "providers": {
    "vercel": {
      "name": "Vercel",
      "baseURL": "https://api.v0.dev/v1",
      "envKey": "V0_API_KEY"
    }
  }
}

Session Reproduction

export V0_API_KEY="v1:2n................................................3"
codex --provider vercel --model v0-1.0-md --verbose

CLI output

● OpenAI Codex (research preview) v0.1.2505172129
...
POST https://api.v0.dev/v1/chat/completions
→ 401 Unauthorized
⚠️  OpenAI rejected the request. Status: 401, Message: "Unauthorized"

Direct cURL test

curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Bearer $V0_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"v0-1.5-md","messages":[{"role":"user","content":"ping"}]}' \
  https://api.v0.dev/v1/chat/completions
# ⇒ 401
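
For reference, the same request can also be run with -v to confirm the Authorization header actually goes out on the wire; this variant trims the output so the key isn't echoed in full:

curl -sv https://api.v0.dev/v1/chat/completions \
  -H "Authorization: Bearer $V0_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"v0-1.5-md","messages":[{"role":"user","content":"ping"}]}' \
  -o /dev/null 2>&1 | grep -i '^> authorization' | cut -c1-40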

What I’ve tried

  1. Regenerated API key twice and re-exported it.
  2. Confirmed key length (53 chars incl. v1: prefix).
  3. Ensured OPENAI_API_KEY is unset to avoid provider mix-up (quick shell check shown after this list).
  4. Placed config file inside WSL $HOME (not Windows path).
  5. Tested both v0-1.0-md and v0-1.5-md models.
  6. Verified billing page shows active usage-based billing.
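
A quick way to double-check (3) in the current shell is to list which of the two variables are actually exported (values trimmed so the keys aren't shown in full):

# prints one line per exported variable; nothing is printed for unset ones
env | grep -E '^(V0_API_KEY|OPENAI_API_KEY)=' | cut -c1-30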

Still receive 401 for every request.


Expected
200 OK and a valid chat completion.

Actual
401 Unauthorized with message "Unauthorized".

Please advise on any missing configuration or backend-side flag that needs to be enabled.
Happy to provide extra logs or run further diagnostics.

Thanks a lot!

There doesn’t appear to be a systemic issue with the keys (the overall 401 rate is low, and I can successfully follow your steps).

Evidence points to your API key being rejected. The thing to focus on here is the cURL request, since that eliminates anything weird Codex could be doing.

You will also get a 401 if the $V0_API_KEY variable is undefined. If your export V0_API_KEY command ran in a different environment and the variable didn’t carry over to where you ran curl, the request goes out without a valid key, so make sure you run the export and the curl back to back.
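
For example, a guard like this fails loudly if the variable is missing or empty in the current shell, instead of silently sending an empty bearer token:

# aborts with an error if V0_API_KEY is unset or empty in this shell
: "${V0_API_KEY:?V0_API_KEY is not set in this shell}"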

➜  ~ export V0_API_KEY="v1:……………………………………………………………………………………………O"
➜  ~ curl -s -o /dev/null -w "%{http_code}\n" \                              
  -H "Authorization: Bearer $V0_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"v0-1.5-md","messages":[{"role":"user","content":"ping"}]}' \
  https://api.v0.dev/v1/chat/completions
200

I’ll check with the v0 team to see if we have any better resources for debugging this.

Hi Jacob,

Thanks again for following up — I finally managed to get the Vercel key itself working, but I’ve run into a second, CLI-specific problem that’s preventing me from switching my default provider back to OpenAI.

Where things stand now

| Step | Result |
| --- | --- |
| 1. curl with V0_API_KEY | ✅ returns 200 (v0-1.5-md), so the key & billing are good. |
| 2. codex --provider vercel … after codex auth login vercel | ✅ works (no more 401). |
| 3. codex config set provider openai from the shell | ❌ CLI prints its banner, drops into the interactive REPL, then ignores the config command. |
| 4. codex config show \| head | ❌ same banner → REPL → unhandled EPIPE (Node v22.16.0). |
| 5. codex config path | ✅ prints the expected path (/home/loai1/.codex/config.yaml). |

Manually editing ~/.codex/config.yaml does change the default to OpenAI, but any attempt to use codex config … results in the REPL + EPIPE crash above, so I can’t rely on the built-in config commands.
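
For reference, the manual edit amounts to something like the following (field names mirror the JSON config shown earlier; the model value is only an example, and the exact keys config.yaml accepts may differ):

# ~/.codex/config.yaml
provider: openai
model: o4-mini            # example default model
providers:
  vercel:
    name: Vercel
    baseURL: https://api.v0.dev/v1
    envKey: V0_API_KEY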

Environment

  • WSL (Ubuntu 22.04) on Windows 10
  • Node 22.16.0
  • Codex CLI v0.1.2505172129 (installed via npm -g)

Repro

codex auth logout vercel
codex auth login vercel           # paste Vercel v1: key
codex config set provider openai  # ← drops to REPL instead of writing config

Full CLI output (first lines)

● OpenAI Codex (research preview) v0.1.2505172129
localhost session: 22ce660c472d4a20bff2a9475c96abfe
↳ workdir: ~/…/targetwise_2.0
↳ model: v0-1.0-md
↳ provider: vercel
↳ approval: suggest
node:events:496
      throw er; // Unhandled 'error' event
Error: write EPIPE
…

Questions

  1. Is this a known issue with codex config set on v0.1.2505172129 / Node ≥22?
  2. Should I be using a different syntax (or --non-interactive) to modify config outside the REPL?
  3. Is the recommended workaround simply to edit ~/.codex/config.yaml by hand for now?

Appreciate any guidance or a pointer to an open issue/PR if it already exists.

Thanks!

— Luay S

I couldn’t get it to work either. I followed the v0 docs and have the vercel provider added in config.json, and I have V0_API_KEY in the environment alongside OPENAI_API_KEY (which seems to be the cause of the issue).

curl with V0_API_KEY returns 200 as well

I think it is an upstream issue reported in the codex repo:

and the issue seems to be related to being logged in with openai keys and then trying to switch to other providers.
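
If it helps isolate that, a minimal test is to launch Codex with the OpenAI key stripped from the environment for that one invocation, so only the Vercel key is visible (just a sketch of the isolation step, not a confirmed fix):

# run codex with OPENAI_API_KEY removed from the environment for this invocation only
env -u OPENAI_API_KEY codex --provider vercel --model v0-1.5-md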