createOpenAI keeps returning 400 when using baseURL


import { createOpenAI } from "@ai-sdk/openai";
import { streamText } from "ai";

// 1. Read your environment variables
const apiKey = process.env.OPENAI_API_KEY;
const baseUrl = process.env.OPENAI_BASE_URL;

const openai = createOpenAI({
  apiKey: apiKey,
  baseURL: baseUrl,
});

export async function POST(req: Request) {
  const { prompt, systemPrompt } = await req.json();

  console.log("[/api/chat] request received", {
    promptLength: typeof prompt === "string" ? prompt.length : 0,
    hasSystemPrompt: Boolean(systemPrompt),
  });

  const result = await streamText({
    model: openai("claude-3-5-haiku-20241022"),
    messages: [
      {
        role: "system",
        content: systemPrompt ?? "你是一名专业的中文智能理财顾问,回答要通俗易懂、风险提示清晰。", // "You are a professional Chinese financial advisor; answer plainly, with clear risk warnings."
      },
      {
        role: "user",
        content: prompt,
      },
    ],
  });

  return result.toTextStreamResponse();
}

This is my code, and I am sure my API key and baseURL work, because the following curl request succeeds:

curl --location --request POST 'baseurl' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer apikey' \
--data-raw '{
    "model": "claude-3-5-haiku-20241022",
    "max_tokens": 512,
    "messages": [
        {
            "role": "user",
            "content": "从上海火车站到陆家嘴怎么走"
        }
    ],
    "temperature": 0,
    "stream": true
}'

But now, when using the Vercel AI SDK, it keeps giving me this error:

Error [AI_APICallError]: Bad Request
    at ignore-listed frames {
  cause: undefined,
  url: 'baseurl',
  requestBodyValues: [Object],
  statusCode: 400,
  responseHeaders: [Object],
  responseBody: '{"message":"Cannot invoke \"java.util.List.iterator()\" because the return value of \"com.zhongan.ai.gateway.protocol.adapter.llm.entity.Request.getMessages()\" is null"}',
  isRetryable: false,
  data: undefined
}
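The responseBody says getMessages() returned null, i.e. the gateway never saw a `messages` array in the request body. One way to see what the SDK actually sends is to pass a wrapping fetch to the provider (createOpenAI accepts a `fetch` option); this is a debugging sketch, and `withRequestLogging` is a hypothetical helper name, not part of the SDK:

```typescript
type FetchLike = (input: string | URL | Request, init?: RequestInit) => Promise<Response>;

// Wrap an existing fetch so each outgoing request's URL and body are logged
// before being forwarded unchanged.
function withRequestLogging(baseFetch: FetchLike): FetchLike {
  return async (input, init) => {
    const url =
      typeof input === "string" ? input : input instanceof URL ? input.toString() : input.url;
    console.log("[debug] request url:", url);
    if (typeof init?.body === "string") {
      const body = JSON.parse(init.body);
      // The gateway's 400 says `messages` was null, so this is the field to watch:
      console.log("[debug] has messages array:", Array.isArray(body.messages));
    }
    return baseFetch(input, init);
  };
}

// Usage (assuming the provider's `fetch` option):
// const openai = createOpenAI({ apiKey, baseURL: baseUrl, fetch: withRequestLogging(fetch) });
```

Comparing this log output with the working curl body above should show what the gateway is missing.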

Hi @nextdooruncleliu, welcome to the Vercel Community!

It seems like you're using Claude models with the OpenAI provider; instead, you should be using the AI SDK Anthropic provider (AI SDK Providers: Anthropic).

Also, what is the value of the baseUrl environment variable here?

In any case, you should use the correct model provider combination.

Thank you for your reply. Below is my modified code:


import { createAnthropic } from '@ai-sdk/anthropic';
import { streamText } from "ai";

// 1. Read your environment variables
const apiKey = process.env.OPENAI_API_KEY;
const baseUrl = process.env.OPENAI_BASE_URL;
console.log(apiKey, baseUrl);

const anthropic = createAnthropic({
  apiKey: apiKey,
  baseURL: baseUrl,
});

export async function POST(req: Request) {
  const { prompt, systemPrompt } = await req.json();

  console.log("[/api/chat] request received", {
    promptLength: typeof prompt === "string" ? prompt.length : 0,
    hasSystemPrompt: Boolean(systemPrompt),
  });

  const result = await streamText({
    model: anthropic("claude-3-5-haiku-20241022"),
    messages: [
      {
        role: "system",
        content: systemPrompt ?? "你是一名专业的中文智能理财顾问,回答要通俗易懂、风险提示清晰。", // "You are a professional Chinese financial advisor; answer plainly, with clear risk warnings."
      },
      {
        role: "user",
        content: [
          {
            type: 'text',
            text: prompt
          }
        ],
      },
    ],
  });

  return result.toTextStreamResponse();
}

My baseURL is an internal service provided by my company, and now it reports the following error:

Error [AI_APICallError]: Type validation failed: Value: {"id":"2aae7e02-ec0a-4beb-8006-465bd977e69b","model":"claude-3-5-haiku-20241022","choices":[{"index":0,"delta":{"role":"assistant","content":""}}],"object":"chat.completion.chunk"}.
Error message: [
  {
    "code": "invalid_union",
    "errors": [],
    "note": "No matching discriminator",
    "discriminator": "type",
    "path": [
      "type"
    ],
    "message": "Invalid input"
  }
]
    at ignore-listed frames {
  cause: undefined,
  url: 'baseurl',
  requestBodyValues: [Object],
  statusCode: 500,
  responseHeaders: [Object],
  responseBody: '{"name":"AI_TypeValidationError","cause":{"name":"ZodError","message":"[\\n  {\\n    \\"code\\": \\"invalid_union\\",\\n    \\"errors\\": [],\\n    \\"note\\": \\"No matching discriminator\\",\\n    \\"discriminator\\": \\"type\\",\\n    \\"path\\": [\\n      \\"type\\"\\n    ],\\n    \\"message\\": \\"Invalid input\\"\\n  }\\n]"},"value":{"id":"2aae7e02-ec0a-4beb-8006-465bd977e69b","model":"claude-3-5-haiku-20241022","choices":[{"index":0,"delta":{"role":"assistant","content":""}}],"object":"chat.completion.chunk"}}',
  isRetryable: false,
  data: undefined
}
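The failing value in this error is an OpenAI-style `chat.completion.chunk`, while the Anthropic provider validates against Anthropic's Messages event stream, whose chunks carry a `type` discriminator. A small sketch (hypothetical helper, not SDK code) that classifies a parsed chunk by wire format:

```typescript
type WireFormat = "anthropic-messages" | "openai-chat-completions" | "unknown";

function classifyChunk(chunk: Record<string, unknown>): WireFormat {
  // Anthropic Messages streaming events carry a `type` discriminator
  // (e.g. "content_block_delta"); OpenAI-compatible endpoints stream
  // objects tagged `object: "chat.completion.chunk"`.
  if (typeof chunk.type === "string") return "anthropic-messages";
  if (chunk.object === "chat.completion.chunk") return "openai-chat-completions";
  return "unknown";
}

// The failing value quoted in the error above:
const failingChunk = {
  id: "2aae7e02-ec0a-4beb-8006-465bd977e69b",
  model: "claude-3-5-haiku-20241022",
  choices: [{ index: 0, delta: { role: "assistant", content: "" } }],
  object: "chat.completion.chunk",
};

console.log(classifyChunk(failingChunk)); // "openai-chat-completions"
```

Since the gateway streams OpenAI-format chunks, the Anthropic provider's Zod schema finds no matching `type` discriminator, which matches the "invalid_union" error exactly.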

Okay. Is the baseUrl correctly pointing to the Claude API servers?

Yes, our company's service forwards requests to the Claude API normally. I used the same configuration in LangChain and it runs fine.

Thank you for your support. I am currently using the LangChain adapter (Adapters: LangChain) and can now make requests normally.

Hi @nextdooruncleliu, thanks for sharing this.

Do you mean that LangChain works fine with the same API_KEY and BASE_URL combination, but the AI SDK Anthropic provider doesn't?

If it’s possible could you share the URL format here?

Absolutely possible, here is the code I implemented:

import { ChatOpenAI } from '@langchain/openai';

const model = new ChatOpenAI({
  model: "claude-3-5-haiku-20241022",
  configuration: {
    baseURL,
    apiKey,
  },
});

const { messages }: { messages: UIMessage[] } = await req.json();
const agent = createAgent({
  model,
  tools: [getWeather],
});

But there is a problem: many AI SDK features seem unusable this way, such as DevTools (AI SDK Core: DevTools). I hope you can tell me whether there is a better way to use it. TypeScript reports:

Type 'ChatOpenAI' is missing the following properties from type 'LanguageModelV3': specificationVersion, provider, modelId, supportedUrls, and 2 more.

const openai = new ChatOpenAI({
  model: "claude-3-5-haiku-20241022",
  configuration: {
    baseURL,
    apiKey,
  },
});

const model = wrapLanguageModel({
  model: openai,
  middleware: devToolsMiddleware(),
});

I understand that wrapLanguageModel only accepts model objects implementing the AI SDK's own interfaces, and does not support objects returned by LangChain. Is there any way to meet my requirements?
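For illustration, the mismatch the compiler reports can be sketched as a structural check. `looksLikeLanguageModel` is a hypothetical helper, and the interface below lists only the fields named in the error, not the full AI SDK type:

```typescript
// Illustrative only: field names taken from the compiler error above; the
// real LanguageModelV3 interface has more members (doGenerate, doStream, etc.).
interface LanguageModelShape {
  specificationVersion: string;
  provider: string;
  modelId: string;
}

function looksLikeLanguageModel(x: unknown): x is LanguageModelShape {
  const o = x as Record<string, unknown> | null;
  return (
    typeof o?.specificationVersion === "string" &&
    typeof o?.provider === "string" &&
    typeof o?.modelId === "string"
  );
}

// A LangChain ChatOpenAI instance carries none of these fields, which is
// exactly what the type error reports:
const chatOpenAILike = { model: "claude-3-5-haiku-20241022" };
console.log(looksLikeLanguageModel(chatOpenAILike)); // false
```

So a LangChain model can't be passed to wrapLanguageModel directly; the two libraries implement different model abstractions.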

Hi @nextdooruncleliu, thanks for sharing the additional code, but it's missing the baseURL format. What is the URL?

The baseURL points to the company's service, for example: http://xx.xxx.com/devpilot/v1/external/cline/chat/completion
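One thing worth noting about this URL: it already ends in a full endpoint path (`.../chat/completion`), while AI SDK providers generally append their own route to `baseURL`. A sketch of that joining behavior (illustrative, not the SDK's actual implementation; `joinProviderUrl` is a hypothetical helper):

```typescript
// Providers typically form the request URL as baseURL + their own route,
// e.g. "/chat/completions". If baseURL already contains the full endpoint
// path, the resulting URL is wrong, which is one plausible source of the
// errors above.
function joinProviderUrl(baseURL: string, route: string): string {
  return baseURL.replace(/\/+$/, "") + route;
}

const fromThread = "http://xx.xxx.com/devpilot/v1/external/cline/chat/completion";
console.log(joinProviderUrl(fromThread, "/chat/completions"));
// → ".../cline/chat/completion/chat/completions" (doubled path segment)
```

If the gateway only listens on the exact path above, a baseURL trimmed back to the directory (with the provider supplying the route) or a custom fetch that rewrites the URL may be needed.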


Got it. And this endpoint is hosting the claude-3.5 model?

It's not fixed; I can configure Claude, GPT, DeepSeek, or Qwen.

Hello, I'm not sure if I've expressed myself clearly. I'd like to keep using the Vercel AI SDK to build my web application, but I'm stuck here. I hope you can provide a solution.

Hi @nextdooruncleliu, thanks for clarifying that. Let’s try and debug what could be wrong.

Can you create an API key in the AI Gateway section of your Vercel Dashboard and see whether using AI Gateway with the AI SDK works for you? You can follow these docs: AI SDK Providers: Anthropic.

This way we will be sure that your AI SDK code works. After that we can try and narrow down where the issue is.

No need; I have already solved the problem using a community provider (Community Providers: Requesty), and it works normally. This approach better fits my requirements. Thank you for your help. I suggest that this part of the documentation be explained in more detail and placed somewhere more prominent. Nowadays, many enterprises deploy their own models, and demand for this is relatively high.


If I understand correctly, AI SDK works with Requesty for the same code and you are not blocked anymore?

Nowadays, many enterprises have their own deployment models, and their demands for this part are still relatively high

I agree. Our team is working on making it as easy as possible. This is why the baseURL option is provided: to make it easy to use self-hosted models.

Yes, this method currently meets my requirements in full; it runs normally with my self-hosted model. Thank you for your support.
