Workflow help

I have different contexts, and within each context there are a few different prompts (or a generic system prompt) that I’d want to use. I know I can make a short call to generateObject and then feed the result in as the system prompt, but I’m not sure I like that approach. I also tried providing separate tools that call getObject, but A) I get a stop reason of “length” (which I don’t understand, and setting max tokens didn’t help), and B) I lose streaming. Any tips would be appreciated. Also, if there’s a best practice for calling an AI agent without hitting the “length” stop reason, that would be great! Thanks in advance.
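Roughly what the tools approach looks like on my end (a minimal sketch assuming the Vercel AI SDK’s v4-style API; the model, tool name, and getObject body are simplified placeholders):

```ts
import { streamText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Placeholder: in my real code this loads the stored prompt for a given context.
async function getObject(contextId: string): Promise<string> {
  return `You are the assistant for context ${contextId}.`;
}

const result = streamText({
  model: openai('gpt-4o'),
  messages: [{ role: 'user', content: 'Help me with this task.' }],
  maxSteps: 2, // let the model call the tool, then answer
  tools: {
    loadContextPrompt: tool({
      description: 'Fetch the system prompt for the given context',
      parameters: z.object({ contextId: z.string() }),
      execute: async ({ contextId }) => getObject(contextId),
    }),
  },
});

console.log(await result.finishReason); // this is where I see 'length'
```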

Regarding the “length” stop reason: it means the response hit the token limit before it could finish. Try increasing maxTokens significantly (e.g. 4000+, depending on your model), or switch to a model with a larger context window.
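Something like this keeps streaming intact while giving the response room to finish (a minimal sketch assuming the Vercel AI SDK’s v4-style API; the model, prompt, and messages are placeholders):

```ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const systemPrompt = 'You are the assistant for this context.'; // whichever prompt you picked

const result = streamText({
  model: openai('gpt-4o'),
  system: systemPrompt,
  messages: [{ role: 'user', content: 'Help me with this task.' }],
  maxTokens: 4000, // raise this until finishReason stops coming back as 'length'
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk); // streaming still works
}
console.log(await result.finishReason); // aim for 'stop' instead of 'length'
```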