🧱 Introducing Traits.dev - Treating System Prompts as Infrastructure

Hey everyone - Justin here. Nice to virtually meet you all.

A lot of serious agentic applications are being built within the Vercel ecosystem, including with the Vercel AI SDK. It’s an incredibly clean way to wire models into real products.

As I’ve been building and deploying agent-centric systems into production, one thing has increasingly bothered me:

We treat system prompts, arguably the most powerful control surface in an agentic application, like untyped strings (maybe it’s just me).

As agentic software becomes:

  • Customer-facing

  • Embedded into real business workflows

  • Financially or compliance-sensitive

  • Reviewed by security or governance teams

the system prompt is no longer just instructions but something more akin to infrastructure. This is what led me to build traits.dev, perhaps foolishly, and that’s why I’m sharing it with the Vercel community.

Where Traits Fits in the Vercel AI SDK Stack

If you’re using the Vercel AI SDK, your flow likely looks something like:

import { streamText } from "ai";

const result = await streamText({
  model,
  system: systemPrompt,
  messages
});

traits.dev focuses on that system surface.

Instead of manually crafting that string, you define a structured, versionable behavior profile that compiles into a deterministic system prompt.

import { defineTraitProfile } from "traits.dev";

export const financialAssistant = defineTraitProfile({
  voice: {
    tone: "professional",
    verbosity: "concise"
  },
  constraints: {
    must_not: [
      "provide personalized financial advice",
      "speculate beyond verified data"
    ]
  },
  policies: {
    truthfulness: "strict",
    citation_required: true
  }
});

const systemPrompt = financialAssistant.compile();

Then:

const result = await streamText({
  model,
  system: systemPrompt,
  messages
});

Traits is intended to be additive to the Vercel AI SDK: it hardens one of the SDK’s most important inputs.

Why I Think This Matters

Right now, most agentic apps:

  • Store prompts in files

  • Modify them ad hoc

  • Mix personality with compliance rules

  • Have no structured way to review behavioral changes

  • Have no confident, auditable answer to “What exactly is our agent allowed to do?” or “What exactly did our agent do?”

That’s fine for prototypes, but as enterprise-grade systems become agentic at their core and serve a wide range of roles across organizations, I believe teams will need:

  • Deterministic system prompt generation

  • Separation of personality and policy

  • Explicit hard vs soft constraints

  • Versionable behavior definitions

  • A foundation for evaluation and release safety

Traits is a small primitive aimed at that future.
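To make the “deterministic generation” and “separation of personality and policy” points concrete, here’s a simplified, self-contained sketch of the core idea. This is an illustration, not the real implementation: `compileProfile` and its output format are stand-ins for what `compile()` does under the hood.

```typescript
// Hypothetical sketch of profile-to-prompt compilation.
// The point: a structured profile with stable key ordering compiles
// to a byte-identical system prompt every time, so behavior changes
// show up as clean, reviewable diffs.

type TraitProfile = {
  voice?: Record<string, string>;
  constraints?: { must_not?: string[] };
  policies?: Record<string, string | boolean>;
};

function compileProfile(profile: TraitProfile): string {
  const sections: string[] = [];

  if (profile.voice) {
    const voice = profile.voice;
    const lines = Object.keys(voice)
      .sort() // sorted keys => deterministic output
      .map((key) => `- ${key}: ${voice[key]}`);
    sections.push("## Voice\n" + lines.join("\n"));
  }

  if (profile.constraints?.must_not?.length) {
    // Hard constraints live in their own section, kept
    // separate from personality and tone.
    const lines = profile.constraints.must_not.map(
      (rule) => `- MUST NOT ${rule}`
    );
    sections.push("## Hard constraints\n" + lines.join("\n"));
  }

  if (profile.policies) {
    const policies = profile.policies;
    const lines = Object.keys(policies)
      .sort()
      .map((key) => `- ${key}: ${policies[key]}`);
    sections.push("## Policies\n" + lines.join("\n"));
  }

  return sections.join("\n\n");
}

// Same input always yields the same string, so the compiled prompt
// can be diffed and versioned like any other build artifact.
const prompt = compileProfile({
  voice: { tone: "professional", verbosity: "concise" },
  constraints: { must_not: ["provide personalized financial advice"] },
  policies: { truthfulness: "strict", citation_required: true },
});
```

Because the output is deterministic, a content hash of the compiled prompt can serve as a behavior version, and a reviewer can see exactly which policy line changed between releases.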

Looking for Honest Feedback

I’m genuinely curious:

  • Does this abstraction resonate if you’re building with the Vercel AI SDK?

  • Are you thinking about prompt governance yet?

  • Does this feel premature or inevitable?

  • What would make it infra-grade in your mind?

🌐 https://traits.dev
🧑‍💻 https://www.github.com/justinhambleton/traits

If this resonates, I’d appreciate a ⭐ on GitHub.

But more importantly, I’m looking for honest feedback from the Vercel community. I chose the Vercel AI SDK as the first integration because so much cool stuff is happening in this community and ecosystem.

Cheers to building, and cheers to the Vercel community!

-Justin
Founder, frntr.ai
