Sedna — Stop sharing sensitive company information with AI providers

We’re building Sedna, the AI governance layer that lets organizations protect against, and get a holistic view of, unsanctioned AI usage. It operates at the network layer through one of several deployable network ingest pipelines, allowing for flexible and rapid internal deployment.

  • View AI usage in real time with historical analytics
  • Redact sensitive information in flight, before it reaches providers
  • Completely block requests that violate policies
  • View and manage compliance with emerging and existing frameworks
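
As a toy illustration of the in-flight redaction idea (not our actual detection logic — real detectors are far richer than two regexes), a masking pass over an outbound request body might look like:

```typescript
// Toy in-flight redaction: mask obvious sensitive patterns in a request
// body before it leaves the network. The patterns below are placeholders
// for illustration only.
const PATTERNS: Array<[RegExp, string]> = [
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[REDACTED_SSN]"], // US SSN-like numbers
  [/\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b/g, "[REDACTED_EMAIL]"],
];

export function redact(body: string): string {
  // Apply each pattern in turn, replacing matches with a mask token.
  return PATTERNS.reduce((text, [re, mask]) => text.replace(re, mask), body);
}
```

In a real pipeline this sits between the ingest point and the provider, so the original text never leaves the network.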

We’re running on Vercel; our landing page uses Next.js 16 with Turbopack and cacheComponents enabled. The entire monorepo is powered by Bun paired with Turborepo. Our TypeScript packages live alongside our Python packages, which are managed by uv.
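
As a rough sketch (every directory and package name here is hypothetical, not our actual layout), a mixed Bun/Turborepo and uv workspace like this might be arranged as:

```text
sedna/
├── package.json        # Bun workspace root
├── turbo.json          # Turborepo task pipeline
├── apps/
│   └── web/            # Next.js landing page
├── packages/
│   └── ui/             # shared TypeScript components
└── services/
    └── ingest/         # Python service, managed by uv
        └── pyproject.toml
```

Bun handles the JavaScript/TypeScript workspaces while uv owns the Python subtrees, so each toolchain manages only its own dependency graphs.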

I’m particularly proud of our development environment and developer onboarding experience.

Our predev command is configured to use 1Password as the single source of truth for development environment variables. When bun dev runs, you’re prompted to scan your fingerprint, and your environment is brought into sync with the remote source of truth.
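
A minimal sketch of how such a hook could be wired up (the template filename and task names are assumptions, not our exact setup), using the 1Password CLI’s op inject to materialize a local .env from a shared template of op:// secret references — the biometric prompt comes from the CLI’s desktop-app integration:

```json
{
  "scripts": {
    "predev": "op inject -i .env.tpl -o .env",
    "dev": "turbo run dev"
  }
}
```

With this shape, running bun dev triggers predev first, so the local .env is refreshed from 1Password before any service starts.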

This means that new developers on the project run bun install and bun dev to spin up the entire Dockerized development environment. Not bad for a 30-package monorepo, eh?
