📦 BackItUp

A secure, flexible backup utility built with Bun

Glob patterns • tar.gz compression • Local + S3 storage • Docker volumes • Scheduled backups • Safe cleanup

GitHub Release License Bun

Linux macOS Windows Docker

GitHub Sponsors Ko-fi Discord


✨ Features

  • 🎯 Named Sources – Define backup sources with names and reference them in schedules
  • 🔍 Glob Patterns – Include/exclude files using patterns (**/*.ts, !**/node_modules/**)
  • 🐳 Docker Volumes – Back up Docker volumes with Docker Compose integration
  • ☁️ Dual Storage – Store backups locally and/or in S3 (supports R2, MinIO, etc.)
  • ⏰ Scheduled Backups – Cron-based scheduling with timezone support and independent retention policies
  • 🛡️ Safe Cleanup – Multi-layer validation before any deletion (checksums, path verification)
  • ✅ Integrity Verification – Verify that backups exist and checksums match

🚀 Installation

Linux / macOS

# Quick install
curl -fsSL https://raw.githubusercontent.com/climactic/backitup/main/scripts/install.sh | bash

# Install specific version
curl -fsSL https://raw.githubusercontent.com/climactic/backitup/main/scripts/install.sh | bash -s -- --version v1.0.0

# Uninstall
curl -fsSL https://raw.githubusercontent.com/climactic/backitup/main/scripts/install.sh | bash -s -- --uninstall

Windows (PowerShell)

# Quick install
irm https://raw.githubusercontent.com/climactic/backitup/main/scripts/install.ps1 | iex

# Install specific version
$env:BACKITUP_VERSION="v1.0.0"; irm https://raw.githubusercontent.com/climactic/backitup/main/scripts/install.ps1 | iex

# Uninstall
$env:BACKITUP_ACTION="uninstall"; irm https://raw.githubusercontent.com/climactic/backitup/main/scripts/install.ps1 | iex

Docker

docker pull ghcr.io/climactic/backitup:latest

📥 Binaries for all platforms are available on GitHub Releases


⚡ Quick Start

1. Create a config file (configuration reference):

backitup.config.yaml
version: "1.0"

database:
  path: "./data/backitup.db"

sources:
  app:
    path: "/var/www/myapp"
    patterns:
      - "**/*.ts"
      - "**/*.js"
      - "!**/node_modules/**"

local:
  enabled: true
  path: "./backups"

s3:
  enabled: false
  bucket: "my-backups"
  # region: "us-east-1"
  # endpoint: "http://localhost:9000"  # For S3-compatible services
  # accessKeyId: "key"                 # Or use S3_ACCESS_KEY_ID env var
  # secretAccessKey: "secret"          # Or use S3_SECRET_ACCESS_KEY env var

# Optional: Set default timezone for all schedules
# scheduler:
#   timezone: "America/New_York"

schedules:
  daily:
    cron: "0 2 * * *"        # Daily at 2 AM
    # timezone: "Europe/London"  # Override global timezone
    retention:
      maxCount: 7            # Keep max 7 backups
      maxDays: 14            # Delete after 14 days

backitup.config.json
{
  "version": "1.0",
  "database": {
    "path": "./data/backitup.db"
  },
  "sources": {
    "app": {
      "path": "/var/www/myapp",
      "patterns": ["**/*.ts", "**/*.js", "!**/node_modules/**"]
    }
  },
  "local": {
    "enabled": true,
    "path": "./backups"
  },
  "s3": {
    "enabled": false,
    "bucket": "my-backups"
  },
  "schedules": {
    "daily": {
      "cron": "0 2 * * *",
      "retention": {
        "maxCount": 7,
        "maxDays": 14
      }
    }
  }
}

2. Run:

backitup backup              # Manual backup
backitup start               # Start scheduler daemon
backitup list                # List backups
backitup cleanup             # Clean old backups
backitup verify --all        # Verify integrity

📖 Commands

All commands support -c, --config <path> to specify a config file and -h, --help for detailed usage.

| Command | Description | Docs |
| --- | --- | --- |
| backitup backup | Create a backup | backup.md |
| backitup start | Start scheduler daemon | start.md |
| backitup list | List existing backups | list.md |
| backitup cleanup | Clean old backups | cleanup.md |
| backitup verify | Verify backup integrity | verify.md |
| backitup export-db | Export the database file | export-db.md |

Examples:

backitup start                    # Start scheduler daemon
backitup start -c /etc/backup.yaml

backitup backup                   # Create backup (interactive)
backitup backup -s daily          # Create backup with schedule tag
backitup backup --dry-run         # Preview what would be backed up
backitup backup --local-only      # Skip S3 upload
backitup backup --volumes-only    # Only backup Docker volumes

backitup cleanup                  # Clean old backups (with confirmation)
backitup cleanup -s daily         # Clean only "daily" tagged backups
backitup cleanup --dry-run        # Preview deletions
backitup cleanup --force          # Skip confirmation

backitup list                     # List all backups
backitup list -s daily -n 10      # Filter by schedule, limit results
backitup list --format json       # Output as JSON or CSV

backitup verify --all             # Verify all backup checksums
backitup verify <backup-id>       # Verify specific backup
backitup verify --all --fix       # Update DB for missing files

backitup export-db ./backup.db    # Export database to file

βš™οΈ Inline Configuration

Override config file settings directly from the command line. Useful for quick backups, scripts, or CI/CD pipelines. See full inline config documentation.
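
For example, a crontab entry can run a fully inline backup on a schedule without touching the config file (the paths and the "weekly" schedule tag below are illustrative, not defaults):

# Illustrative crontab entry: inline weekly backup every Sunday at 03:00
0 3 * * 0 /usr/local/bin/backitup backup -s weekly --source /etc --local-path /backups/etc --retention-count 4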

Available Options

| Category | Option | Description |
| --- | --- | --- |
| Database | --database <path> | Database file path |
| Sources | --source <path> | Source path to back up (can be repeated) |
| | --pattern <glob> | Glob pattern for filtering (can be repeated) |
| Local Storage | --local-path <path> | Local storage path |
| | --no-local | Disable local storage |
| S3 Storage | --s3-bucket <name> | S3 bucket name |
| | --s3-prefix <prefix> | S3 key prefix |
| | --s3-region <region> | S3 region |
| | --s3-endpoint <url> | S3-compatible endpoint URL |
| | --s3-access-key-id <key> | S3 access key ID |
| | --s3-secret-access-key <key> | S3 secret access key |
| | --no-s3 | Disable S3 storage |
| Retention | --retention-count <n> | Maximum backups to keep |
| | --retention-days <n> | Maximum days to retain backups |
| Archive | --archive-prefix <str> | Archive filename prefix |
| | --compression <0-9> | Compression level (default: 6) |
| Safety | --verify-before-delete | Verify checksums before cleanup |
| | --no-verify-before-delete | Skip checksum verification |
| Docker | --docker | Enable Docker volume backups |
| | --no-docker | Disable Docker volume backups |
| | --docker-volume <name> | Docker volume to back up (can be repeated) |
| | --stop-containers | Stop containers before volume backup |
| | --no-stop-containers | Don't stop containers (default) |
| | --stop-timeout <seconds> | Timeout for graceful stop (default: 30) |
| | --restart-retries <n> | Retry attempts for restart (default: 3) |

Examples

# Quick backup with inline sources
backitup backup -s manual --source /var/www/app --local-path /backups

# Multiple sources with glob patterns
backitup backup -s manual --source /data --source /logs --pattern "**/*.log" --local-path /backups

# Backup directly to S3
backitup backup -s manual --source /app --s3-bucket my-backups --s3-region us-west-2 --no-local

# Backup with S3 credentials inline
backitup backup -s manual --source /data --s3-bucket my-bucket \
  --s3-access-key-id AKIAIOSFODNN7EXAMPLE \
  --s3-secret-access-key wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# Start scheduler with inline overrides
backitup start --source /data --local-path /backups --s3-bucket my-bucket

# Override retention and compression
backitup backup -s manual --source /db --retention-count 5 --retention-days 7 --compression 9

# Backup Docker volumes inline
backitup backup -s manual --docker-volume postgres_data --docker-volume redis_data --local-path /backups

Inline options are merged with your config file. This allows you to use a base config and override specific settings as needed.
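
For instance, assuming your config file already defines sources, local storage, and S3, a one-off run can reuse all of that while raising compression and skipping the S3 upload (a sketch of the merge behavior, not a prescribed workflow):

# Sources and storage paths come from the config file;
# only compression and S3 usage are overridden here
backitup backup -s manual --compression 9 --no-s3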

Config-Free Mode

You can run backitup without a config file by providing the required inline options:

Required options:

  • At least one source: --source or --docker-volume
  • At least one storage: --local-path or --s3-bucket

# Minimal backup without config file
backitup backup -s manual --source /data --local-path /backups

# Backup to S3 without config file
backitup backup -s manual --source /app --s3-bucket my-backups --s3-region us-west-2

# Docker volume backup without config file
backitup backup -s manual --docker-volume postgres_data --local-path /backups

# Full example with multiple options
backitup backup -s manual \
  --source /var/www/app \
  --pattern "**/*.js" --pattern "!**/node_modules/**" \
  --local-path /backups \
  --retention-count 5 \
  --compression 9

If you run without a config file and don't provide sufficient options, backitup will tell you what's missing.


🐳 Docker Volume Backup

BackItUp can back up Docker volumes alongside your files. Each volume is backed up to a separate .tar.gz archive.

Configuration

version: "1.0"

# ... other config ...

docker:
  enabled: true

  # Global container stop settings (optional)
  containerStop:
    stopContainers: true      # Stop containers before backup
    stopTimeout: 30           # Seconds to wait for graceful stop
    restartRetries: 3         # Retry attempts if restart fails
    restartRetryDelay: 1000   # Milliseconds between retries

  volumes:
    # Direct volume name
    - name: postgres_data

    # Volume from Docker Compose service
    - name: db
      type: compose
      composePath: ./docker-compose.yml
      projectName: myapp  # Optional, inferred from directory

    # Per-volume container stop override
    - name: redis_data
      containerStop:
        stopContainers: false  # Override: don't stop for this volume

How It Works

  1. BackItUp uses a temporary Alpine container to create the backup (see the sketch after this list)
  2. The volume is mounted read-only to ensure data safety
  3. If stopContainers is enabled:
    • Containers using the volume are gracefully stopped before backup
    • After backup completes, containers are automatically restarted
    • Failed restarts are retried according to the restartRetries setting
  4. If a volume is in use by running containers and stopping is disabled, the backup proceeds with a warning
  5. Each volume produces a separate archive: backitup-volume-{name}-{schedule}-{timestamp}.tar.gz
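
Conceptually, steps 1–2 are similar to running a throwaway Alpine container with the volume mounted read-only and archiving its contents; the sketch below is a simplified illustration, not the exact command BackItUp runs:

# Simplified illustration: mount the volume read-only in a temporary
# Alpine container and tar its contents to a host directory
docker run --rm \
  -v postgres_data:/source:ro \
  -v "$(pwd)/backups":/dest \
  alpine tar czf /dest/postgres_data.tar.gz -C /source .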

Container Stop/Restart

For data consistency (especially with databases), you can stop containers before backing up their volumes:

# Stop containers before backup (CLI)
backitup backup -s daily --docker-volume postgres_data --stop-containers --local-path /backups

# With custom timeout and retries
backitup backup -s daily --docker-volume postgres_data \
  --stop-containers --stop-timeout 60 --restart-retries 5 \
  --local-path /backups

Important notes:

  • Containers with restart: always or restart: unless-stopped policies may auto-restart after being stopped. BackItUp detects this and logs a warning.
  • If a container fails to restart after all retries, the backup still succeeds but a warning is logged.
  • Per-volume settings override global settings, allowing fine-grained control.

Docker Compose Integration

When using type: compose, BackItUp resolves the actual Docker volume name:

# docker-compose.yml
services:
  db:
    image: postgres:16
    volumes:
      - db_data:/var/lib/postgresql/data

volumes:
  db_data:

# backitup.config.yaml
docker:
  enabled: true
  volumes:
    - name: db_data                    # Direct: uses "db_data"
    - name: db                         # Compose: resolves to "myapp_db_data"
      type: compose
      composePath: ./docker-compose.yml
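
If you want to confirm which volume name will be used, standard Docker commands show what the Compose project actually created (the "myapp" project name is just the example from above):

# Volumes created by the Compose project are prefixed with the project name
docker volume ls --filter name=myapp

# List the volume keys defined in the compose file itself
docker compose -f docker-compose.yml config --volumes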

Restoring Volume Backups

# Extract the volume backup
tar -xzf backitup-volume-postgres_data-daily-2024-01-15T14-30-22-123Z.tar.gz -C /tmp/restore

# Create a new volume and restore
docker volume create postgres_data_restored
docker run --rm -v postgres_data_restored:/data -v /tmp/restore:/backup alpine \
  sh -c "cp -a /backup/. /data/"
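
To sanity-check the restore before pointing a container at it, you can list the restored volume's contents with another throwaway container:

# Inspect the restored volume
docker run --rm -v postgres_data_restored:/data alpine ls -la /data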

🐳 Running as a Service

Docker

The Docker image uses three volume mount points:

| Mount Point | Purpose |
| --- | --- |
| /config | Config file (backitup.config.yaml) and database |
| /data | Source files to back up (or mount your own paths) |
| /backups | Local backup storage destination |

docker run -d --name backitup \
  -v ./config:/config \
  -v ./data:/data:ro \
  -v ./backups:/backups \
  -e S3_ACCESS_KEY_ID=key \
  -e S3_SECRET_ACCESS_KEY=secret \
  ghcr.io/climactic/backitup:latest start

Your config file should reference these paths:

# /config/backitup.config.yaml
version: "1.0"
database:
  path: /config/backitup.db
sources:
  app:
    path: /data
local:
  enabled: true
  path: /backups

To back up Docker volumes from within a container, mount the Docker socket:

docker run -d --name backitup \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v ./config:/config \
  -v ./data:/data:ro \
  -v ./backups:/backups \
  ghcr.io/climactic/backitup:latest start

You can also mount specific host paths instead of using /data:

docker run -d --name backitup \
  -v ./config:/config \
  -v /var/www/myapp:/myapp:ro \
  -v /var/lib/postgres:/postgres:ro \
  -v ./backups:/backups \
  ghcr.io/climactic/backitup:latest start
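
With mounts like these, the config file references the container-side paths; here is a minimal sketch matching the mounts above (the source names are illustrative):

# /config/backitup.config.yaml
version: "1.0"
database:
  path: /config/backitup.db
sources:
  myapp:
    path: /myapp
  postgres:
    path: /postgres
local:
  enabled: true
  path: /backups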

Docker Compose

services:
  backitup:
    image: ghcr.io/climactic/backitup:latest
    command: start
    user: root  # Required for reading files with restrictive permissions
    volumes:
      - ./config:/config              # Config + database
      - ./data:/data:ro               # Source files (read-only)
      - ./backups:/backups            # Local backup destination
      - /var/run/docker.sock:/var/run/docker.sock  # For volume backups
    environment:
      - S3_ACCESS_KEY_ID=key
      - S3_SECRET_ACCESS_KEY=secret
    restart: unless-stopped

File Permissions

The Docker image runs as a non-root user (UID 1000) by default. If you're backing up files with restrictive permissions (e.g., SSH keys, database files), you have two options:

Tip: Always mount source data directories as read-only (:ro) to prevent accidental modifications during backup.

Option 1: Run as root (simplest)

services:
  backitup:
    image: ghcr.io/climactic/backitup:latest
    user: root
    # ...

Option 2: Match the host user

If your files are owned by a specific user, run the container as that user:

services:
  backitup:
    image: ghcr.io/climactic/backitup:latest
    user: "1000:1000"  # Match the UID:GID of file owner
    # ...
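
If you're not sure which IDs to use, read the owner's UID and GID from the files themselves, or from the current user:

# UID:GID of the files you want to back up
stat -c '%u:%g' /var/www/myapp

# UID:GID of the current user
echo "$(id -u):$(id -g)"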

One-time vs Scheduled Backups

Use command: start for the long-running scheduler daemon with restart: unless-stopped.

For one-time backups (e.g., triggered by cron or CI), use restart: on-failure to prevent the container from restarting indefinitely after a successful backup:

services:
  backitup:
    image: ghcr.io/climactic/backitup:latest
    command: backup -s manual
    user: root
    volumes:
      - ./config:/config
      - ./data:/data:ro
      - ./backups:/backups
    restart: on-failure  # Don't restart after successful backup
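
A host cron entry can then trigger this one-time service on whatever cadence you need (the compose file path below is illustrative):

# Run the one-off backup service nightly at 01:30
30 1 * * * docker compose -f /opt/backitup/docker-compose.yml run --rm backitup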

systemd

# /etc/systemd/system/backitup.service
[Unit]
Description=backitup scheduler
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/backitup start
Restart=always

[Install]
WantedBy=multi-user.target
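
After installing the unit file, enable and start it with the usual systemd commands:

sudo systemctl daemon-reload
sudo systemctl enable --now backitup.service
sudo systemctl status backitup.service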

📚 Documentation

| Document | Description |
| --- | --- |
| Configuration Reference | Complete config file reference |
| Inline Configuration | CLI options and config-free mode |
| Commands | |
| backup | Create backups |
| start | Run scheduler daemon |
| list | List existing backups |
| cleanup | Remove old backups |
| verify | Verify backup integrity |
| export-db | Export the database file |

πŸ› οΈ Development

bun install && bun run dev   # Development with hot reload
bun test                     # Run tests
bun run build                # Build standalone executable

💖 Support

If you find BackItUp useful, consider supporting its development:

GitHub Sponsors • Ko-fi

Your support helps maintain and improve this project! ⭐

πŸ† Title Sponsors

Title sponsors get their logo showcased here and in the project documentation. Become a title sponsor →


📜 License

License
