Question on database connection concurrency handling for Fluid Compute

I was wondering how to handle Prisma connections with Supabase, which uses Supavisor for its connection pooling, when running on Fluid Compute.

My current apps have specific high-traffic periods, and I handle connections by appending ?pgbouncer=true&connection_limit=1&pool_timeout=30 to the connection string, which essentially limits every function invocation to a single connection. This works well because I let Supavisor manage the pooling with a pool size of around 30.
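
For reference, the full connection string looks roughly like this (the project ref, password, and region are placeholders following Supabase's Supavisor transaction pooler format):

DATABASE_URL="postgresql://postgres.<project-ref>:<password>@aws-0-<region>.pooler.supabase.com:6543/postgres?pgbouncer=true&connection_limit=1&pool_timeout=30"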

With Fluid Compute, my understanding is that function instances stay warm for a while and get reused across invocations.

My code currently only stores Prisma as a global variable when running the Next.js dev server; in production it creates a new Prisma instance for each function invocation.

With Fluid Compute, does that change how I need to manage connections? Should I store Prisma as a global, just like I do with my development server, so that multiple requests hitting the same warm function don't consume too many connections from Supavisor's connection pool?

Moreover, if I store Prisma as a global variable, wouldn't long-running queries make other requests queue up waiting for the connection to be released?

Here is a sample of my current prisma.ts file:

import { PrismaClient } from '@prisma/client';
import { fieldEncryptionExtension } from 'prisma-field-encryption';

import { isProd } from './utils/env'; // local helper that checks the environment (import path assumed)

declare global {
  // eslint-disable-next-line no-var
  var prisma: PrismaClient | undefined;
}

const createPrismaClientWithEncryption = () => {
  return new PrismaClient({
    datasources: {
      db: {
        url: process.env.DATABASE_URL,
      },
    },
  }).$extends(fieldEncryptionExtension()) as PrismaClient;
};

export const prisma: PrismaClient = global.prisma || createPrismaClientWithEncryption();

if (!isProd()) { // store prisma in global only for non-production environments (aka non-serverless)
  global.prisma = prisma;
}
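
For completeness, this is roughly how my route handlers consume it (the import alias and the User model are just illustrative):

// app/api/users/route.ts
import { prisma } from '@/lib/prisma';

export async function GET() {
  const users = await prisma.user.findMany();
  return Response.json(users);
}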

I’m wondering the same after reading: Connection Pooling with Serverless Functions

Do I have to initialize the Prisma/database connection in Next.js's instrumentation.ts file in order to make use of the optimization?
Or, how/where can I initialize Prisma so it lives outside the function handler?
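
For reference, this is what I imagine that would look like, just a sketch using Next.js's register() hook and a shared prisma module like the one above (the import path is assumed), not something I've confirmed is actually needed:

// instrumentation.ts (sketch)
export async function register() {
  // Only warm the connection in the Node.js runtime, not the Edge runtime
  if (process.env.NEXT_RUNTIME === 'nodejs') {
    const { prisma } = await import('./lib/prisma'); // path assumed
    await prisma.$connect();
  }
}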

Hi @srosato, welcome to the Vercel Community!

Thanks for laying out the details. Your current setup looks good and should work with Fluid without any changes. The Prisma | Supabase Docs page doesn't mention using connection_limit=1, so I'd suggest trying it without that parameter and seeing whether you notice any performance or cost issues.
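
In other words, something like this (host and credentials are placeholders):

DATABASE_URL="postgresql://postgres.<project-ref>:<password>@aws-0-<region>.pooler.supabase.com:6543/postgres?pgbouncer=true&pool_timeout=30"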

Otherwise, don't stress too much about Fluid; it should work with the code you've already written for a serverless environment.
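
That said, if you do want a single PrismaClient reused across the requests a warm instance handles, a minimal sketch (reusing your factory from above, and only one of several reasonable approaches) would be to cache the client on globalThis in every environment instead of only in development:

import { PrismaClient } from '@prisma/client';
import { fieldEncryptionExtension } from 'prisma-field-encryption';

const createPrismaClientWithEncryption = () =>
  new PrismaClient({
    datasources: {
      db: {
        url: process.env.DATABASE_URL,
      },
    },
  }).$extends(fieldEncryptionExtension()) as PrismaClient;

// Cache the client on globalThis so a warm instance reuses one
// PrismaClient (and its connection) across the requests it handles
const globalForPrisma = globalThis as unknown as { prisma?: PrismaClient };

export const prisma: PrismaClient =
  globalForPrisma.prisma ?? createPrismaClientWithEncryption();

globalForPrisma.prisma = prisma;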

Hi @simonknittel, welcome to the Vercel Community!

You should write your application for a serverless environment, as I shared in the reply above.

Fluid Compute is an optimization on top of serverless, so the code should still be written for serverless; Vercel will use Fluid Compute to optimize performance and cost on your behalf.