Move from pooled to dedicated (v2 → v1)
Suppose you built Bloom Atelier on v2 shared (pooled Postgres and PostgREST behind the Flux gateway) and you are ready for v1 dedicated (your own Postgres and PostgREST containers). Flux can orchestrate that move with `flux migrate`.
This page is about changing the execution engine for an existing project. It is not about SQL schema files; those stay in Git and remain the source of truth. See Migrations workflow.
What you will learn
- How `flux migrate` relates to the control plane versus your app's Service URL
- A safe order: dry run → optional dump-only → staged or full migrate
- A success checklist after a full cutover (CLI, dashboard, Docker, app env)
- Why errors mention `pg_dump` on the server, not on your laptop
The idea
v2 shared and v1 dedicated are both real PostgreSQL-backed stacks; the difference is where data lives and how strong the isolation boundary is. In practice, v2 shared is often where people first learn Flux—free tier, shared cluster, and neighbor load can mean higher latency or tighter limits than a dedicated stack. None of that makes pooled mode a toy; it is a different deployment model.
The v1 / v2 labels name execution strategies (dedicated containers versus pooled cluster), not a universal “newer is better” ranking. Choose dedicated when isolation, compliance posture, or predictable capacity outweigh the operational simplicity of shared infrastructure.
Teams often start on pooled infrastructure, then move a project to dedicated when policy, risk, or operations call for a container-level boundary. That is a product choice—not a statement that pooled mode is “fake.”
`flux migrate` talks to the control plane API (`…/api/cli/v1/migrate`), not to your tenant Service URL (`https://api--<slug>--<hash>.…`). Your browser and server components call the Service URL for rows; the CLI calls the dashboard API to provision, dump, and flip `projects.mode`.
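As a mental model, the two origins look like this. A sketch only; both values are illustrative placeholders following the hosted naming used elsewhere on this page, so substitute your own deployment's values:

```bash
# Control plane: what flux login / flux migrate talk to.
export FLUX_API_BASE="https://flux.vsl-base.com"
# Data plane: what your app calls for rows (tenant Service URL).
export FLUX_URL="https://api--bloom-atelier--61d9dff.vsl-base.com"
# flux migrate goes through the dashboard's /api/cli/v1/migrate endpoint;
# it never provisions or dumps through the Service URL.
```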
So you need:
- A working CLI login to the same control plane that owns the project.
- For hosted deployments, the CLI can infer `FLUX_API_BASE` from `FLUX_URL` (or `NEXT_PUBLIC_FLUX_URL`) when those point at a `*.vsl-base.com` tenant host; see Installation and Environment variables. Self-hosted still needs an explicit `FLUX_API_BASE` if your Service URL is on a custom domain.
- The project slug and seven-character hash from `flux list` (or the dashboard), passed on every command as `--project <slug>` and `--hash <hex>`. A repo-root `flux.json` with the same `slug` and `hash` is optional; it only saves repeating those flags for other CLI commands like `flux push`, and `flux migrate` does not require it. See Configuration and the sketch below.
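If you do keep a `flux.json`, a minimal sketch looks like this. It shows only the two keys this page names; anything beyond them is deployment-specific:

```bash
# Write a repo-root flux.json so flux push and friends can omit -p/--hash.
cat > flux.json <<'EOF'
{
  "slug": "bloom-atelier",
  "hash": "61d9dff"
}
EOF
```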
The control plane runs `pg_dump` against the shared cluster during migration. That binary must exist on the host that runs the dashboard, not merely on your laptop.
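A quick way to check, assuming you can open a shell in the control-plane runtime (the command is standard PostgreSQL tooling, not Flux-specific):

```bash
# Run this inside the dashboard/control-plane environment, not on your laptop.
pg_dump --version   # should print a client version, not "command not found"
```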
Before you start
- Confirm the project is `v2_shared` (`flux list` or the dashboard). `flux migrate` refuses other modes.
- Run `flux login` successfully against the intended control plane.
- Commit or back up anything you care about; a full migrate expects downtime while containers are reprovisioned and data is restored.
- Read Pooled vs dedicated so expectations on isolation and URLs stay aligned.
Step 1 — Plan without changing anything
Run from any directory (the examples use `-p` / `--hash` explicitly):
```bash
flux migrate -p bloom-atelier --hash 61d9dff --to v1_dedicated --dry-run
```

Use your real slug and hash from `flux list`. If you keep a `flux.json` for `flux push`, you may omit those flags when your shell's current working directory is that repo; migrate still validates that the hash's slug matches `-p` when both are present. Inspect the printed plan and preflight (schemas, table counts, etc.). Fix surprises before you add `--yes`.
Step 2 — Optional: dump-only (still no engine flip)
This asks the control plane to run `pg_dump` for the tenant schema and write a file on the control-plane host. It does not switch the project to dedicated by itself.
```bash
flux migrate -p bloom-atelier --hash 61d9dff --to v1_dedicated --dump-only --yes
```

Use this to validate connectivity and tooling (`pg_dump` on the server, shared DB URL) before you accept downtime.
Step 3 — Staged migrate (data on dedicated, catalog still pooled)
`--staged` provisions the dedicated stack and restores from the dump, but does not flip `projects.mode` to `v1_dedicated` yet. Use it when you want to inspect the dedicated database before the public cutover.
```bash
flux migrate -p bloom-atelier --hash 61d9dff --to v1_dedicated --staged --yes
```

Do not combine `--staged` with `--new-jwt-secret`; the catalog secret would no longer match the new stack.
Step 4 — Full migrate (cut over to v1_dedicated)
When you are ready for the catalog to record dedicated and for traffic expectations to follow:
```bash
flux migrate -p bloom-atelier --hash 61d9dff --to v1_dedicated --yes
```

By default the control plane enters gateway maintenance for the tenant while work is in flight; pass `--no-lock-writes` only if you understand and accept the risk of writes landing during the move.
After success, `flux list` should show the project as `v1_dedicated` and the Service URL shape your deployment documents (the flattened host is the usual external contract).
Optional flags (check `flux migrate --help`)
| Flag | Meaning |
|---|---|
| `--new-jwt-secret` | Rotate `jwt_secret` on cutover; update every client that mints JWTs for PostgREST. |
| `--drop-source-after` | After a non-staged success, remove the tenant from the shared cluster. Destructive; use only when you are sure dedicated is authoritative. |
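For example, a full cutover that also rotates the JWT secret would combine the documented flags like this (shown for illustration; confirm against `flux migrate --help` on your build):

```bash
flux migrate -p bloom-atelier --hash 61d9dff --to v1_dedicated --new-jwt-secret --yes
```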
After a successful full migrate
When `flux migrate … --yes` (without `--staged`) returns `"message": "Migration complete. Project is now v1_dedicated."`, walk this checklist before you call the cutover done.
If you only ran `--staged`, the catalog is still `v2_shared` until you run that final full migrate; treat the staged success message as confirmation that the dedicated database is populated, not as a sign that public traffic or the `flux list` mode has flipped yet.
Success checklist
- CLI — Run `flux list`: the project row should show `v1_dedicated` and the Service URL you expect (often the same flattened `https://api--<slug>--<hash>.…` pattern when slug and hash are unchanged). This page's `flux migrate` path applies only while the project is `v2_shared`; after cutover, use day-to-day dedicated commands (`flux push`, repair, lifecycle) as for any `v1_dedicated` project.
- Dashboard — Open Projects: the card should show Online / Healthy (or your fleet labels), the API URL field should match `flux list`, and v1-style lifecycle actions (for example Stop) should appear where your host enables them. Pooled-only affordances disappear because the project is no longer on the shared engine.
- Docker host — On the machine that runs tenant containers, `docker ps` should list this project's Postgres and PostgREST pair (names like `flux-<hash>-<slug>-db` and `flux-<hash>-<slug>-api`); see the sketch after this list. That confirms dedicated provisioning, not only a catalog flip.
- Optional cleanup — If you intentionally used `--drop-source-after` on a non-staged run, confirm the tenant schema is gone from the shared cluster so you are not paying for two copies of truth. If you did not use that flag, the v2 copy may still exist until operators remove it under your own policy.
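A condensed verification pass might look like this (container names follow the pattern above; the `--filter` value is just a convenience):

```bash
# From your workstation: catalog view of the project.
flux list                      # row should read v1_dedicated

# On the Docker host that runs tenant containers:
docker ps --filter "name=flux-61d9dff-bloom-atelier"
# Expect both flux-61d9dff-bloom-atelier-db and flux-61d9dff-bloom-atelier-api.
```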
Update your app (Bloom or any project)
- Refresh `NEXT_PUBLIC_FLUX_URL` / `FLUX_URL` (and any server-only base URL) from `flux list` or the dashboard if the Service URL or routing identity changed; see the env sketch after this list.
- If you rotated secrets (`--new-jwt-secret`), run `flux project credentials` (or the dashboard) and paste the new `FLUX_GATEWAY_JWT_SECRET` (or equivalent) into your env files. With default `preserveJwtSecret` behavior, secrets often stay the same; still verify.
- Dedicated stacks expose your tenant API schema as provisioned; if you previously targeted `t_<shortId>_api` only on v2, re-read Service URLs and your RLS **GRANT**s. RLS without grants still yields `42501` (Troubleshooting → 42501).
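A sketch of the resulting env refresh. The variable names are the ones this page uses; the URL and secret values are placeholders you copy from `flux list` and `flux project credentials`:

```bash
# .env.local (or wherever your deployment keeps these)
NEXT_PUBLIC_FLUX_URL=https://api--bloom-atelier--61d9dff.vsl-base.com
FLUX_URL=https://api--bloom-atelier--61d9dff.vsl-base.com
# Only needed if you ran --new-jwt-secret:
FLUX_GATEWAY_JWT_SECRET=paste-the-rotated-secret-here
```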
Re-run your smoke tests (curl or app E2E) before you announce cutover.
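A minimal curl smoke test, assuming a PostgREST-style read against a hypothetical `flowers` table; `$FLUX_ANON_JWT` stands in for however you mint a token with the secret currently in the catalog:

```bash
curl -fsS \
  -H "Authorization: Bearer $FLUX_ANON_JWT" \
  "$NEXT_PUBLIC_FLUX_URL/flowers?select=id&limit=1"
# Any 2xx with a JSON array means routing, JWT secret, and grants line up.
```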
Troubleshooting
Errors below split hosted Flux (you use `flux.vsl-base.com` or another vendor-run control plane) from self-hosted (you run the dashboard API and Docker host yourself). See Production hardening for operator-focused context.
These failures are specific to engine migration: they happen during `flux migrate`, not in normal application traffic. For everyday request-time symptoms (401, 403, 42501, empty arrays), the canonical home is Troubleshooting.
`pg_dump` not found (control plane)
Migrate runs `pg_dump` on the server that serves the control-plane API, not on your laptop.
- Hosted: This is a platform packaging issue. Confirm `FLUX_API_BASE` points at the real dashboard API origin, then contact Flux support or watch the vendor status channel; you cannot install tools into the hosted control plane from your app repo.
- Self-hosted operators: Install PostgreSQL client binaries (`pg_dump`) in the dashboard/control-plane runtime (the same environment that executes `/api/cli/v1/migrate`), then rebuild and redeploy that service so a fresh container includes them (see the sketch below). Restarting an old image without rebuilding will not add `pg_dump`.
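A sketch of that repair for a Debian-based dashboard image; the package manager, image base, and the `dashboard` service name are assumptions, so adapt them to your own build:

```bash
# In the dashboard image's Dockerfile, add the PostgreSQL client tools:
#   RUN apt-get update && apt-get install -y --no-install-recommends postgresql-client
# Then rebuild and redeploy so a fresh container actually contains pg_dump:
docker compose build dashboard
docker compose up -d dashboard
docker compose exec dashboard pg_dump --version   # sanity check
```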
`invalid command \restrict` during restore
Newer `pg_dump` output can include psql meta-commands `\restrict` / `\unrestrict` that older `psql` inside the tenant Postgres container rejects.
- Hosted: The control plane must ship a release that sanitizes those lines when the dedicated Postgres major version is before 17. If you see this on hosted Flux after you are already on the latest CLI, contact support; it is not something you fix in application SQL.
- Self-hosted operators: Deploy a dashboard build that includes that sanitization on the restore path, then retry `--staged` or full migrate (an illustration follows below). Optionally align `pg_dump`/`psql` client majors with your tenant image if you maintain both images yourself.
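For intuition, the sanitization amounts to dropping those meta-command lines from the dump before restore. If you ever restore a dump by hand, it looks roughly like this (file names are placeholders):

```bash
# Strip psql \restrict / \unrestrict meta-commands an older psql would reject.
sed -E '/^\\(un)?restrict /d' tenant-dump.sql > tenant-dump.clean.sql
```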
role "service_role" does not exist (PostgREST probe)
PostgREST maps the JWT `role` claim to a database role; dedicated tenant DBs must define `service_role` just like `anon` / `authenticated`.
- Hosted: The tenant bootstrap on the control plane must create that role. If the probe fails on hosted Flux, contact support or wait for a platform update, then rerun migrate (often after containers are recreated).
- Self-hosted operators: Ship an updated tenant bootstrap (or run a one-off `CREATE ROLE service_role`, `GRANT … TO authenticator`, and schema grants) on existing volumes that predate the role, then rerun migrate; a sketch follows below.
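A one-off repair sketch. The container name follows this page's pattern; the superuser name, database, and role attributes are assumptions modeled on common PostgREST setups, so check your own bootstrap before running anything like this:

```bash
docker exec -i flux-61d9dff-bloom-atelier-db psql -U postgres -d postgres <<'SQL'
-- Create the missing role that PostgREST maps the "service_role" claim to.
CREATE ROLE service_role NOLOGIN;
-- Let the authenticator role switch into it, as for anon / authenticated.
GRANT service_role TO authenticator;
-- Re-apply whatever schema/table grants your bootstrap gives the other roles.
SQL
```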
Other CLI issues
- `Request failed` / wrong project: confirm `FLUX_API_BASE` points at your dashboard `/api` origin, not only at the tenant API host.
- Slug/hash mismatch: the `-p` slug must match the project that owns the `--hash` row in `flux list` for your API token. If you use `flux.json`, its `slug`/`hash` must match that same row; otherwise pass explicit `-p`/`--hash` and ignore or fix the file.
Next steps
- Pooled vs dedicated
- Migrations workflow (SQL `flux push`)
- Configuration (`flux.json`)
- CLI reference