# Flux v2 shared
Flux v2 runs many tenants on a shared PostgreSQL cluster and PostgREST pool while keeping logical isolation at the schema and database role level. A Node gateway is the public ingress: it resolves the host to a tenant, validates external JWTs, and mints bridge JWTs for PostgREST.
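A minimal sketch of the gateway's two jobs described above: resolving the request host to a tenant and minting a short-lived bridge JWT for PostgREST. The registry shape, the `mintBridgeJwt` name, the role naming, and the use of HS256 are all assumptions for illustration, not details from the spec; a real gateway would look tenants up in the control plane and also validate the external JWT first.

```typescript
import { createHmac } from "crypto";

// Hypothetical in-memory tenant registry keyed by host; in the real system
// this lookup would hit the control-plane database. All values are made up.
const tenantsByHost: Record<string, { tenantId: string; role: string }> = {
  "acme.example.com": {
    tenantId: "7f3c9a12-0000-4000-8000-000000000000",
    role: "t_7f3c9a12_user",
  },
};

// Base64url without padding, as required by the JWT spec.
const b64url = (buf: Buffer): string =>
  buf.toString("base64").replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");

// Mint a short-lived HS256 bridge JWT. PostgREST reads the `role` claim and
// switches to that database role for the duration of the request.
function mintBridgeJwt(host: string, secret: string, ttlSeconds = 60): string {
  const tenant = tenantsByHost[host];
  if (!tenant) throw new Error(`unknown tenant host: ${host}`);
  const header = b64url(Buffer.from(JSON.stringify({ alg: "HS256", typ: "JWT" })));
  const payload = b64url(
    Buffer.from(
      JSON.stringify({
        role: tenant.role,
        tenant_id: tenant.tenantId,
        exp: Math.floor(Date.now() / 1000) + ttlSeconds,
      })
    )
  );
  const sig = b64url(createHmac("sha256", secret).update(`${header}.${payload}`).digest());
  return `${header}.${payload}.${sig}`;
}
```

Because only the gateway holds the signing secret, PostgREST can trust the `role` claim without knowing anything about hosts or slugs.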
## What you will learn
- Hard invariants (see full v2 specification)
- Soft isolation tradeoffs on Free/Pro
- Why gateway correctness is critical
## The idea
v2 exists to reduce container sprawl and memory overhead versus v1 dedicated. It does not remove Postgres or PostgREST from the picture—it changes how they are shared.
Authoritative spec: Flux v2 architecture specification — single canonical document in /docs (no duplicate prose path).
## Invariants (summary)
- `tenant_id` is immutable; the slug is UI-only.
- Schema/role names derive from `tenant_id` via a deterministic short id; the slug is never embedded in schema identifiers.
- Only the gateway issues runtime JWTs for tenant API traffic (for this path).
- PostgREST is not publicly reachable without passing gateway controls in the target topology.
- Do not enumerate all tenant schemas in `PGRST_DB_SCHEMAS`; access is via grants + `search_path` + the JWT role.
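The naming invariant above can be sketched as follows. The concrete derivation is an assumption here (a truncated SHA-256 of `tenant_id`; the spec's actual short-id scheme may differ), and the `t_…`/`…_user` prefixes are illustrative. What matters is that only the immutable `tenant_id` feeds the derivation, never the slug.

```typescript
import { createHash } from "crypto";

// Deterministic short id from the immutable tenant_id. Truncated SHA-256 is
// an assumption for this sketch; the spec may define a different derivation.
function shortId(tenantId: string): string {
  return createHash("sha256").update(tenantId).digest("hex").slice(0, 8);
}

// Schema and role names embed only the short id, never the slug, so renaming
// a tenant's slug never forces a schema or role rename.
function schemaName(tenantId: string): string {
  return `t_${shortId(tenantId)}`;
}

function roleName(tenantId: string): string {
  return `t_${shortId(tenantId)}_user`;
}
```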
## How it works
Client → Service URL → Gateway → Bridge JWT → PostgREST pool → tenant schema

Operational controls (rate limits, connection limits, timeouts) mitigate noisy neighbors but do not create dedicated hardware isolation.
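One of the operational controls mentioned above, sketched as a per-tenant token bucket. The class name, capacity, and refill rate are illustrative assumptions, not values from the spec; the point is that each tenant draws from its own bucket, so one tenant's burst cannot consume another's budget.

```typescript
interface Bucket {
  tokens: number;
  lastRefillMs: number;
}

// Per-tenant token-bucket rate limiter (illustrative numbers, not from the
// spec). Each tenant gets an independent bucket, refilled continuously.
class TenantRateLimiter {
  private buckets = new Map<string, Bucket>();

  constructor(private capacity = 10, private refillPerSec = 5) {}

  // Returns true if the tenant may proceed, false if it is rate limited.
  allow(tenantId: string, nowMs = Date.now()): boolean {
    let b = this.buckets.get(tenantId);
    if (!b) {
      b = { tokens: this.capacity, lastRefillMs: nowMs };
      this.buckets.set(tenantId, b);
    }
    const elapsedSec = (nowMs - b.lastRefillMs) / 1000;
    b.tokens = Math.min(this.capacity, b.tokens + elapsedSec * this.refillPerSec);
    b.lastRefillMs = nowMs;
    if (b.tokens >= 1) {
      b.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

Connection limits and statement timeouts follow the same pattern: they cap a tenant's share of the pooled resources without pretending to be hardware isolation.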
## Example
When connection spikes hit the shared cluster, you scale or split clusters operationally. Product docs should not promise per-tenant CPU pinning on the pooled tier unless it is explicitly offered.