
Major version upgrades #22

@davissp14

Description

Major upgrades

For major releases of PostgreSQL, the internal data storage format is subject to change, thus complicating upgrades. The traditional method for moving data to a new major version is to dump and reload the database, though this can be slow. A faster method is pg_upgrade.
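For context, a minimal sketch of the two approaches (version numbers, binary paths, and data directories below are illustrative, not our actual layout):

```bash
# Dump and reload: simple and version-agnostic, but slow for large datasets.
pg_dumpall -U postgres > dump.sql            # run against the old cluster
psql -U postgres -f dump.sql postgres        # run against a freshly initdb'd new-version cluster

# pg_upgrade: much faster; copies (or with --link, hard-links) the old cluster's
# data files into the new one. Both clusters must be stopped first.
pg_upgrade \
  --old-bindir=/usr/lib/postgresql/12/bin \
  --new-bindir=/usr/lib/postgresql/13/bin \
  --old-datadir=/data/postgres-12 \
  --new-datadir=/data/postgres-13
```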

Generally speaking, it's a bad idea to perform in-place upgrades across major versions. I think we should strongly consider allowing users to restore a snapshot into a new app. This is a relatively common practice across vendors and would give users a safe path to test the new version against their dataset, clients, etc. before fully committing.
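Very roughly, and purely as an assumption about what the tooling could expose (the commands and flags below are illustrative, not the confirmed flyctl interface), the restore-into-a-new-app flow might look like:

```bash
# Illustrative only -- command and flag names here are assumptions, not confirmed flyctl usage.
fly apps create my-db-pg13                              # fresh app targeting the new major version
fly volumes snapshots list vol_xxxxxxxxxxxx             # pick a snapshot of the existing data volume
fly volumes create pg_data --app my-db-pg13 \
  --snapshot-id vs_xxxxxxxxxxxx --size 10               # restore the snapshot into a new volume
```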

I think there are quite a few ways we could do this, but here's a rough example of what this process could look like:

  1. Provision a new app.
  2. Provision and attach a new volume that meets the size constraints specified by the target snapshot.
  3. Provision and attach a second "source" volume containing the restore data.
  4. Run pg_dump against the source volume and perform the upgrade process outlined here: https://www.postgresql.org/docs/13/upgrading.html (see the sketch after this list)
  5. Detach the source volume
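
For step 4, a minimal sketch, assuming the source volume is mounted at /data/source, the new-version cluster is already initialized and running on the new volume, and the old-version binaries are available on the machine (all paths and ports below are hypothetical):

```bash
# Start the old-version postgres against the restored source volume on a side port.
/usr/lib/postgresql/12/bin/pg_ctl -D /data/source -o "-p 5433" start

# Dump everything from the old cluster and load it straight into the new one.
pg_dumpall -p 5433 -U postgres | psql -p 5432 -U postgres postgres

# Once the load is verified, stop the old cluster so the source volume can be detached (step 5).
/usr/lib/postgresql/12/bin/pg_ctl -D /data/source stop
```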

This certainly requires some orchestration, so I'm not sure how feasible the process will be in the short-to-medium term.

Reference: https://www.postgresql.org/docs/13/upgrading.html
