Next.js API route returns 504 (Gateway Timeout) on Vercel while working locally

The API route should return a response from openAI.chat.completions, with base URL https://api.deepseek.com.

Locally, this function (http://localhost:3000/api/generate) works. It takes about 30 seconds to complete.

But remotely on Vercel, the same function (https://zork0.vercel.app/api/generate) gives this error:
Vercel Runtime Timeout Error: Task timed out after 60 seconds

POST https://zork0.vercel.app/api/generate 504 (Gateway Timeout)

This is api/generate/route.ts (createOpenAIClient is my helper that configures the OpenAI client with the DeepSeek base URL):

```typescript
import { NextRequest } from 'next/server';

export async function POST(request: NextRequest) {
  const openAI = createOpenAIClient(); // client with baseURL https://api.deepseek.com
  const body = await request.json();
  const jbody = JSON.stringify(body);

  try {
    const response = await openAI.chat.completions.create({
      model: "deepseek-reasoner",
      store: true,
      messages: [{ role: "user", content: jbody }],
    });
    return Response.json(response.choices[0].message);
  } catch (error) {
    return Response.json({ error: String(error) }, { status: 500 });
  }
}
```

Node.js, "next": "^15.2.3", Vercel Fluid compute enabled.
Vercel Deployment url: https://vercel.com/peterclihotmailcoms-projects/zork0/6NTHw5pk1EQep38UdPBoo2wgtvAB

Fixing Vercel Timeout with AI SDK Streaming

You’re experiencing a timeout issue with your DeepSeek API integration on Vercel. The problem is that your current implementation is waiting for the entire response before returning it, which exceeds Vercel’s 60-second timeout limit for serverless functions.

The Solution: Use AI SDK with Streaming

Instead of using the OpenAI client directly, I recommend using the AI SDK, which handles streaming responses properly. It will start sending data back to the client immediately, avoiding the timeout.

```typescript
import { streamText } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';

// OpenAI-compatible provider pointed at DeepSeek's endpoint
const deepseek = createOpenAI({
  baseURL: 'https://api.deepseek.com',
  apiKey: process.env.DEEPSEEK_API_KEY,
});

export async function POST(req: Request) {
  const body = await req.json();

  const result = streamText({
    model: deepseek('deepseek-reasoner'),
    prompt: JSON.stringify(body),
  });

  return result.toDataStreamResponse();
}
```
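On the client, the streamed response should be consumed incrementally rather than awaited as one JSON payload. A minimal sketch of a reader helper, assuming the route returns a streamed text/data response as above (the helper name and callback are hypothetical, not part of the AI SDK):

```typescript
// Hypothetical helper: read a streamed response body chunk by chunk,
// so the first bytes arrive long before the model finishes.
export async function readTextStream(
  stream: ReadableStream<Uint8Array>,
  onChunk: (text: string) => void = () => {},
): Promise<string> {
  const decoder = new TextDecoder();
  const reader = stream.getReader();
  let full = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    const text = decoder.decode(value, { stream: true });
    full += text; // accumulate the complete text
    onChunk(text); // update the UI incrementally here
  }
  return full;
}
```

Usage would look like `const res = await fetch('/api/generate', { method: 'POST', body: ... }); const text = await readTextStream(res.body!, appendToPage);`.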

You can also try Inngest or a similar background-job SDK, which should help for long-running tasks.

Swarnava,

Are you sure I am exceeding the 60-second timeout? Locally it takes under 30 seconds to complete. I can provide direct metrics if you want.

I will switch to the DeepSeek provider for Vercel; hopefully I will not get gateway timeout errors.
(AI SDK Providers: DeepSeek)

Swarnava,
I rewrote the code to use the Vercel AI SDK DeepSeek provider, and I limited my prompt size to under 50 characters.
But I get the same error:
[POST] /api/generate reason=EDGE_FUNCTION_INVOCATION_TIMEOUT, status=504, user_error=true

This code works OK locally, so I assume the problem is that Vercel does not support generateText, right?
I do not want to rewrite my code again to use streaming. Any suggestions?

Perhaps this might help streamText Tool Invocation Failure - #2 by jessyan0913

Swarnava, switching from generateText() to streamText() is a big code rewrite.

Can you recommend any changes to this code below to fix the 504 error?

```typescript
import { generateText } from 'ai';
import { deepseek } from '@ai-sdk/deepseek';

const result = await generateText({
  model: deepseek('deepseek-reasoner'),
  prompt: JSON.stringify(body),
});
```

Swarnava,

I fixed the 504 error by adding: export const maxDuration = 40;

(I am still using generateText.)
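For anyone landing here, a minimal sketch of the final working route, assuming the AI SDK DeepSeek provider and a DEEPSEEK_API_KEY environment variable are configured (the response shape is illustrative, not from the original post):

```typescript
// app/api/generate/route.ts
import { generateText } from 'ai';
import { deepseek } from '@ai-sdk/deepseek';

// Next.js route segment config: explicitly setting the allowed
// duration is what resolved the 504 in the original poster's case.
export const maxDuration = 40;

export async function POST(request: Request) {
  const body = await request.json();

  const result = await generateText({
    model: deepseek('deepseek-reasoner'),
    prompt: JSON.stringify(body),
  });

  return Response.json({ text: result.text });
}
```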