Getting 500 Internal Server Error FastAPI

Hi all, I am developing a simple agentic AI app using the following stack:

- Next.js → Frontend
- FastAPI → Backend
- Clerk → Authentication

I am currently receiving a 500 Internal Server Error whenever the frontend calls the FastAPI endpoint deployed on Vercel; I expect a 200/success response.

**Build Logs**
 
00:06:06.286 Running build in Washington, D.C., USA (East) – iad1
00:06:06.287 Build machine configuration: 2 cores, 8 GB
00:06:06.422 Retrieving list of deployment files...
00:06:06.900 Downloading 28 deployment files...
00:06:07.675 Restored build cache from previous deployment (Aasp2edaCZy3FmQxfbpohHctPaA5)
00:06:08.196 Running "vercel build"
00:06:08.782 Vercel CLI 48.10.3
00:06:09.130 Installing dependencies...
00:06:10.433 
00:06:10.433 up to date in 1s
00:06:10.434 
00:06:10.434 229 packages are looking for funding
00:06:10.434   run `npm fund` for details
00:06:10.462 Detected Next.js version: 15.5.6
00:06:10.467 Running "npm run build"
00:06:10.653 
00:06:10.653 > saas@0.1.0 build
00:06:10.654 > next build
00:06:10.654 
00:06:11.792    ▲ Next.js 15.5.6
00:06:11.793 
00:06:11.862    Linting and checking validity of types ...
00:06:17.067    Creating an optimized production build ...
00:06:21.553  ✓ Compiled successfully in 1960ms
00:06:21.555    Collecting page data ...
00:06:22.930    Generating static pages (0/5) ...
00:06:23.820    Generating static pages (1/5) 
00:06:23.821    Generating static pages (2/5) 
00:06:23.821    Generating static pages (3/5) 
00:06:23.821  ✓ Generating static pages (5/5)
00:06:24.282    Finalizing page optimization ...
00:06:24.285    Collecting build traces ...
00:06:32.012 
00:06:32.015 Route (pages)                                 Size  First Load JS
00:06:32.015 ┌ ○ / (549 ms)                             3.08 kB         146 kB
00:06:32.016 ├   /_app                                      0 B         142 kB
00:06:32.016 ├ ○ /404                                     180 B         143 kB
00:06:32.016 ├ ○ /product (547 ms)                      45.4 kB         188 kB
00:06:32.016 └ ○ /sign-in/[[...index]] (546 ms)         1.51 kB         144 kB
00:06:32.016 + First Load JS shared by all               147 kB
00:06:32.016   ├ chunks/framework-acd67e14855de5a2.js   57.7 kB
00:06:32.016   ├ chunks/main-62a5bcb5d940e2e2.js        36.8 kB
00:06:32.016   ├ chunks/pages/_app-36c145dffa6388ad.js  46.3 kB
00:06:32.016   └ other shared chunks (total)            5.86 kB
00:06:32.017 
00:06:32.017 ○  (Static)  prerendered as static content
00:06:32.017 
00:06:32.069 Collected static files (public/, static/, .next/static): 3.923ms
00:06:33.753 Using Python 3.12 from pyproject.toml
00:06:35.062 Installing required dependencies from uv.lock...
00:06:35.063 Using uv at "/usr/local/bin/uv"
00:06:36.434 Build Completed in /vercel/output [27s]
00:06:36.662 Deploying outputs...
00:06:46.177 Deployment completed
00:06:46.834 Creating build cache...
00:07:00.688 Created build cache: 13.855s
00:07:00.690 Uploading build cache [170.82 MB]
00:07:02.773 Build cache uploaded: 2.083s

**Runtime Logs**

Python process exited with exit status: 1. The logs above can help with debugging the issue.

**api/index.py**

import os
from fastapi import FastAPI, Depends
from fastapi.responses import StreamingResponse
from fastapi.middleware.cors import CORSMiddleware
from fastapi_clerk_auth import ClerkConfig, ClerkHTTPBearer, HTTPAuthorizationCredentials
from openai import OpenAI

print("[DEBUG] Starting api/index.py import...")

app = FastAPI()

# Add CORS middleware
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

print("[DEBUG] FastAPI app and CORS configured")

# Configure Clerk authentication
clerk_config = ClerkConfig(jwks_url=os.getenv("CLERK_JWKS_URL"))
clerk_guard = ClerkHTTPBearer(clerk_config)

print(f"[DEBUG] Clerk configured with JWKS URL: {os.getenv('CLERK_JWKS_URL')}")
print(f"[DEBUG] OpenAI API key present: {bool(os.getenv('OPENAI_API_KEY'))}")


@app.get("/api/health")
@app.get("/health")
async def health_check():
    """Health check endpoint that doesn't require authentication"""
    return {"status": "healthy", "service": "api"}


@app.get("/api/idea")
@app.get("/idea")
def generate_idea(creds: HTTPAuthorizationCredentials = Depends(clerk_guard)):
    """
    Generate a business idea using OpenAI GPT-4o-mini with streaming response.
    Requires Clerk authentication via Authorization header.
    """
    user_id = creds.decoded["sub"]  # User ID from JWT
    print(f"[INFO] Generating idea for user: {user_id}")

    client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
    prompt = [
        {
            "role": "user",
            "content": "Reply with a new business idea for AI Agents, formatted with headings, sub-headings and bullet points"
        }
    ]

    stream = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=prompt,
        max_tokens=500,
        temperature=0.7,
        stream=True
    )

    def event_stream():
        try:
            for chunk in stream:
                if chunk.choices[0].delta.content is not None:
                    text = chunk.choices[0].delta.content
                    # Escape newlines for SSE format
                    safe_text = text.replace('\n', '\\n')
                    yield f"data: {safe_text}\n\n"
            yield "data: [DONE]\n\n"
        except Exception as e:
            print(f"[ERROR] Stream error: {e}")
            yield f"data: [ERROR] {str(e)}\n\n"

    return StreamingResponse(
        event_stream(),
        media_type="text/event-stream",
        headers={
            "Cache-Control": "no-cache",
            "X-Accel-Buffering": "no"
        }
    )


@app.get("/api/user")
@app.get("/user")
def get_user_info(creds: HTTPAuthorizationCredentials = Depends(clerk_guard)):
    """Get information about the authenticated user."""
    return {
        "user_id": creds.decoded.get("sub"),
        "email": creds.decoded.get("email"),
        "claims": creds.decoded
    }


# Vercel serverless function handler
handler = app

print("[DEBUG] Module initialization complete")

**Project structure**

aas/
├── .env.local
├── .gitignore
├── README.md
├── eslint.config.mjs
├── next-env.d.ts
├── next.config.ts
├── package-lock.json
├── package.json
├── postcss.config.mjs
├── pyproject.toml
├── requirements.txt
├── run-api.sh
├── runtime.txt
├── tsconfig.json
├── tsconfig.tsbuildinfo
├── uv.lock
├── vercel.json
│
├── api/
│   ├── __pycache__/
│   │   └── index.cpython-314.pyc
│   └── index.py
│
├── pages/
│   ├── _app.tsx
│   ├── index.tsx
│   ├── product.tsx
│   │
│   ├── api/
│   │   └── _debug/
│   │
│   └── sign-in/
│       └── [[...index]].tsx
│
├── public/
│   ├── favicon.ico
│   ├── file.svg
│   ├── globe.svg
│   ├── next.svg
│   ├── vercel.svg
│   └── window.svg
│
└── styles/
    └── globals.css


I tried the Python source you posted, and the `handler` variable looks like the problem. If you are using a WSGI or ASGI framework like FastAPI, you don't want a `handler` variable: Vercel interprets `handler` as a `BaseHTTPRequestHandler` value, and there is a runtime check that asserts `handler` is a subclass of `BaseHTTPRequestHandler`. If that check fails, the API call fails with status 500. The relevant blurb in the docs is here.
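The failure mode can be illustrated with a stdlib-only sketch. Note this is an assumption based on the behavior described above, not Vercel's actual runtime source; `FakeASGIApp` is a hypothetical stand-in for the FastAPI instance:

```python
from http.server import BaseHTTPRequestHandler


class FakeASGIApp:
    """Stand-in for a FastAPI application instance (an ASGI app, not a handler class)."""


handler = FakeASGIApp()  # what `handler = app` effectively exposes

# A check along the lines described: `handler` must be a class derived from
# BaseHTTPRequestHandler. An ASGI app instance is not, so the check fails
# and the process exits with a non-zero status.
passes_check = isinstance(handler, type) and issubclass(handler, BaseHTTPRequestHandler)
print(passes_check)  # False
```

The fix, then, is simply to delete the `handler = app` line from `api/index.py` and let the runtime pick up the ASGI `app` object directly.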