[▲ Vercel Community](/)

[AI SDK](/c/ai-sdk/62)

# Workflow help

66 views · 0 likes · 3 posts


Tal32123 (@tal32123) · 2025-10-29

I have different contexts, and within each context there are a few different prompts or a generic system prompt that I'd want to use. I know I can make a short call to `generateObject` and then feed the result in as the system prompt, but I'm not sure I like that approach. I tried providing separate tools that call `generateObject`, but (A) I get the stop reason `length`, which I don't understand, and setting max tokens didn't help, and (B) I then lose streaming. Any tips would be appreciated. Also, if there's a best practice for calling an AI agent without getting the stop reason `length`, that would be great! Thanks in advance.


Pauline P. Narvas (@pawlean) · 2025-10-30

For the “stop reason length”: a finish reason of `length` means the response hit its output-token limit before completing. Try increasing `maxTokens` significantly (4000+, depending on your model) or switch to a model with a larger context window.
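A minimal sketch of checking for this (assuming AI SDK v5, where the option is named `maxOutputTokens`; v4 called it `maxTokens`. The model name below is just a placeholder):

```typescript
// Pure helper: did the generation get cut off by the output-token limit?
// `finishReason` is a field on the AI SDK result object.
function wasTruncated(finishReason: string): boolean {
  return finishReason === 'length';
}

// Sketch of the call itself (assumes the @ai-sdk/openai provider with an
// API key configured):
//
//   import { generateText } from 'ai';
//   import { openai } from '@ai-sdk/openai';
//
//   const result = await generateText({
//     model: openai('gpt-4o'),
//     system: systemPrompt,
//     prompt: userPrompt,
//     maxOutputTokens: 4000, // raise the cap so long answers aren't cut off
//   });
//
//   if (wasTruncated(result.finishReason)) {
//     // Still truncated: raise the limit further, or switch to a model
//     // with a larger context window.
//   }
```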


Pauline P. Narvas (@pawlean) · 2026-01-23

Hey @tal32123! Just checking in to see if you still need help with your workflow and prompts. Have you found a solution, or is there anything specific you’d like to dive deeper into? Excited to help!