Modify ChatSDK instance to have 2 x LLM responses per user message?

Can vercel/ai-chatbot (the full-featured, hackable Next.js AI chatbot built by Vercel on GitHub) be modded to produce 2 LLM responses for each user chat input?

Say one response from an OpenAI model and one from Claude Sonnet, with each LLM getting the full context of the chat, and without breaking any of the SDK's features like data persistence:

  • user
  • OpenAI (system prompt 1)
  • Claude (system prompt 2)
  • user
  • OpenAI
  • Claude
  • etc…

Hey, @alwayshungry! Great username. :smiley: Welcome!

To create a dual-LLM chatbot that responds with both an OpenAI model and Claude Sonnet for each user message, implement separate API routes for each model (/api/chat/openai and /api/chat/claude), then build a custom chat component that maintains separate message arrays for each model.
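As a rough sketch of what one of those routes could look like, assuming AI SDK v4-style `streamText` and provider packages (the route path, model id, and system prompt here are placeholders, not ChatSDK's actual values):

```ts
// app/api/chat/openai/route.ts — a sketch, assuming AI SDK v4-style APIs.
// The Claude route (app/api/chat/claude/route.ts) would mirror this with
// the @ai-sdk/anthropic provider and its own system prompt.
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  // `messages` is whatever history the client posts for this model.
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'), // placeholder model id
    system: 'You are the OpenAI voice in a dual-model chat.', // "system prompt 1"
    messages,
  });

  // Streams tokens back in the data-stream format that useChat expects.
  return result.toDataStreamResponse();
}
```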

This approach preserves all AI SDK features including streaming responses and data persistence while giving users the benefit of comparing outputs from two different AI models in a single conversation flow.
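And a minimal sketch of the client side, assuming the v4-style `useChat` hook from @ai-sdk/react. `DualChat`, the endpoint paths, and the rendering comments are illustrative, not ChatSDK's stock component:

```tsx
'use client';
// DualChat is a hypothetical component, not part of the stock ChatSDK UI.
import { useChat } from '@ai-sdk/react';
import { useState } from 'react';

export function DualChat() {
  const [input, setInput] = useState('');

  // Each hook keeps its own message array and streams from its own route,
  // so streaming behaves exactly as in a single-model chat.
  const openaiChat = useChat({ api: '/api/chat/openai' });
  const claudeChat = useChat({ api: '/api/chat/claude' });

  function handleSubmit(e: React.FormEvent<HTMLFormElement>) {
    e.preventDefault();
    if (!input.trim()) return;
    // Send the same user turn to both models.
    openaiChat.append({ role: 'user', content: input });
    claudeChat.append({ role: 'user', content: input });
    setInput('');
  }

  return (
    <form onSubmit={handleSubmit}>
      {/* Render openaiChat.messages and claudeChat.messages side by side,
          or interleave them for the user → OpenAI → Claude flow above. */}
      <input value={input} onChange={(e) => setInput(e.target.value)} />
      <button type="submit">Send</button>
    </form>
  );
}
```

One caveat with this sketch: with separate arrays, each model sees the user turns plus its own replies, but not the other model's. If each LLM should also see the other's responses (per the flow in your bullet list), you'd merge both histories into each request body before it hits the route, e.g. via useChat's body option or a custom fetch.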

Have you tried using v0.dev to show you this implementation (referencing your own code)?