Hi! I’m currently building an agent builder for business startups, and I’ve run into a limitation that blocks part of my long-term vision: multi-agent support.
From what I can see, multi-agent systems aren’t currently supported in the AI SDK. I wanted to raise this because multi-agent workflows are becoming increasingly important for building more complex AI systems.
For example, Google’s Agent Development Kit (ADK) already supports multi-agent architectures:
> Build powerful multi-agent systems with Agent Development Kit (ADK)
It would be great to see something similar supported in the AI SDK. Right now, the lack of multi-agent support makes it difficult to implement the architecture I’m aiming for. While I know Vercel offers workflow orchestration, it doesn’t quite cover the same use case or flexibility that true multi-agent systems provide.
I’d love to keep building on top of the AI SDK, but without this capability it may eventually require migrating the project to a different framework—which would be unfortunate given everything already built on top of it.
Is multi-agent support something that might be considered in the roadmap?
Hey @luiscaceresd. I ran into the same problem and ended up building a solution that works alongside AI SDK rather than replacing it.
The approach: AI SDK handles the LLM calls and streaming UI (it’s great at that), and a separate orchestration layer handles the multi-agent coordination – task decomposition, dependency ordering, shared memory between agents.
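The dependency-ordering piece of that orchestration layer can be sketched in a few lines of plain TypeScript. This is a minimal illustration, not open-multi-agent's actual API; the names `Task` and `runInOrder` are my own:

```typescript
// A task declares which other tasks it depends on; results are written
// into a shared memory map that later tasks can read.
type Task = {
  id: string;
  deps: string[];
  run: (memory: Map<string, string>) => string;
};

// Run tasks in dependency order: repeatedly pick any task whose
// dependencies have all completed, execute it, and record its result.
function runInOrder(tasks: Task[]): Map<string, string> {
  const memory = new Map<string, string>();
  const done = new Set<string>();
  const pending = [...tasks];
  while (pending.length > 0) {
    const ready = pending.findIndex(t => t.deps.every(d => done.has(d)));
    if (ready === -1) throw new Error("cyclic or missing dependency");
    const [task] = pending.splice(ready, 1);
    memory.set(task.id, task.run(memory)); // visible to downstream tasks
    done.add(task.id);
  }
  return memory;
}
```

With this shape, a writer task that depends on a researcher task runs second even if it is listed first, and it reads the researcher's output out of the shared memory map.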
I put together a working Next.js example that shows the pattern:
- Frontend uses `useChat` from AI SDK for streaming
- Backend API route calls `runTeam()` from open-multi-agent to orchestrate a researcher + writer agent pair
- The coordinator automatically breaks the goal into tasks, runs them in dependency order, and synthesizes results
- Final output is streamed back via AI SDK’s `streamText`
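To make the researcher + writer flow concrete, here's a self-contained sketch with stubbed agents. In the real example each agent call would go through an LLM (e.g. AI SDK's `generateText`) and the final synthesis would be streamed with `streamText`; the `Agent` type and `coordinate` function here are illustrative stand-ins, not the actual library code:

```typescript
// An agent maps an input string to an output string. Real agents would
// be LLM-backed; these stubs just show how the outputs chain together.
type Agent = (input: string) => string;

const researcher: Agent = goal => `findings for "${goal}"`;  // stub: would call an LLM
const writer: Agent = notes => `article based on: ${notes}`; // stub: would call an LLM

function coordinate(goal: string): string {
  // 1. decompose the goal into ordered tasks (research, then write)
  const findings = researcher(goal);
  // 2. the downstream agent consumes the upstream result (shared memory)
  const draft = writer(findings);
  // 3. return the synthesized output, ready to stream back to the client
  return draft;
}
```

The point of the sketch is the data flow: the coordinator owns the ordering and hand-offs, while each agent stays a simple input-to-output function that the model layer can implement however it likes.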
The key insight is that these are different layers – AI SDK is the model/streaming layer, orchestration sits above it. You don’t need to choose one or the other.
Caveats: the orchestration phase adds latency (30-60s for coordinator planning + agent execution) before streaming starts, so it’s better suited for async workflows than instant chat.