Python AI Response Streaming

I’ve attempted WebSockets and SSE (via sse-starlette) for streaming, both of which have obvious flaws when running on Vercel.

I have a Python backend that uses FastAPI as the endpoint for my Next.js frontend. I cannot seem to figure out how to use streamed responses on Vercel, despite reading the AI SDK docs and the Python streaming docs.

Does anyone have a code snippet or framework they can share where they have made this work? My backend is a pretty complex reasoning process, and I am simply streaming the final output. It works great locally, but streaming initially failed during my iterations on function invocation, which turned out to be due to my memory approach (I have since fixed that). Now I simply can’t stream at all: I get 500 errors which, when investigated, appear to mean I’m not streaming correctly from the backend.

I am using a Pydantic-based framework, but the error is 100% from the approach I’m using to stream the final response in the Vercel-hosted setup.
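For context on what "streaming correctly from the backend" usually means with SSE: each chunk has to be emitted as a `data:` line terminated by a blank line. Here is a minimal, self-contained sketch of that framing (the function names and the placeholder chunks are mine, not from the actual backend):

```python
import asyncio
from typing import AsyncIterator, List


def sse_frame(data: str) -> str:
    # One Server-Sent Events frame: a "data:" line plus a blank line.
    return f"data: {data}\n\n"


async def final_output_chunks() -> AsyncIterator[str]:
    # Stand-in for the real reasoning pipeline: yield pieces of the
    # finished answer, yielding control between chunks as a real
    # async pipeline would.
    for piece in ["The ", "final ", "answer."]:
        await asyncio.sleep(0)
        yield piece


async def collect_sse(gen: AsyncIterator[str]) -> List[str]:
    # Frame each chunk exactly the way an SSE endpoint would emit it.
    return [sse_frame(chunk) async for chunk in gen]


frames = asyncio.run(collect_sse(final_output_chunks()))
# Each element is "data: <chunk>\n\n"; a FastAPI endpoint would hand an
# equivalent generator to StreamingResponse(..., media_type="text/event-stream").
```

If a chunk is emitted without the trailing blank line, the browser's `EventSource` (and most SSE parsers) will buffer it instead of delivering it, which locally can look like streaming "just not working".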

Hello,

You can try https://v0.dev/ or our AI SDK Python Streaming template.


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.