
Proxmox Backup Server: The Unsung Hero

·1656 words·8 mins·
Arnab

In my previous post, I shared how Proxmox VE completely transformed my homelab. But I barely scratched the surface of what makes Proxmox truly shine: Proxmox Backup Server, or PBS. Today, I want to dive deeper into how PBS, combined with its native S3 integration, turned my backup strategy from “hope for the best” to “bring it on.”

The Backup Nightmare

If you’ve been in the self-hosting community long enough, you’ve heard the horror stories. Someone’s server dies, and they realize their backups were either non-existent, corrupted, or stored on the same machine that just failed. The 3-2-1 rule (three copies, two different media, one offsite) sounds simple until you actually try to implement it on a shoestring budget.

Before PBS, my backup strategy was a patchwork of shell scripts, Kopia uploads to Backblaze B2, and a lot of fingers crossed. It worked, sure. But recovering from a disaster meant reinstalling Docker, recreating folder structures, downloading terabytes of data from the cloud, and praying that docker-compose up didn’t throw permission errors at 2 AM. It was hectic, error-prone, and frankly, stressful.

PBS: Not Just Another Backup Tool

What sets PBS apart from tools like VZDump or BorgBackup is its architecture. PBS doesn’t just copy your VM disks, it understands them. Every backup is broken into 4MB chunks, each identified by a SHA-256 hash. This means if you back up a 100GB VM daily but only 5% changes, PBS only transfers and stores those changed chunks. Deduplication ratios of 5:1 to 20:1 are common, and some homelabbers report ratios as high as 64:1 with frequent backups.

When I first saw all my backups showing the same size in the PBS UI, I panicked. Turns out, that’s by design. The UI shows the “protected size” (the total data represented), not the actual storage consumed. My 100GB VM with 5% daily changes uses only about 5GB of actual storage thanks to deduplication. Mind blown.

The magic doesn’t stop there. PBS uses dirty bitmaps on the Proxmox VE side to track exactly which disk blocks have changed since the last backup. When a backup runs, the client reads only those changed blocks, chunks them, computes their hashes, and sends only the new chunks to PBS. The result? Backups that complete in minutes instead of hours, even for large VMs.
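The whole pipeline is easier to see in miniature. Here’s a toy Python model of a dirty-bitmap incremental backup into a content-addressed chunk store. The function names and data shapes are mine, not PBS’s actual API, and real PBS also does dynamic chunking for file-level backups; this only sketches the fixed-size VM case:

```python
import hashlib

CHUNK = 4 * 1024 * 1024  # 4 MiB fixed-size chunks, as PBS uses for VM images

def incremental_backup(disk, dirty, prev_index, store):
    """Toy model of a PBS-style incremental backup.

    disk:       the VM disk image as bytes
    dirty:      set of chunk numbers changed since the last backup
                (what the dirty bitmap tracks on the PVE side)
    prev_index: list of chunk digests from the previous snapshot
    store:      digest -> chunk dict standing in for the PBS chunk store

    Returns (index, uploaded): the new snapshot's chunk index and how
    many chunks actually had to be read, hashed, and transferred.
    """
    index, uploaded = [], 0
    for i in range(0, len(disk), CHUNK):
        n = i // CHUNK
        if prev_index and n not in dirty:
            # Clean block: reuse the previous digest. No read, no upload.
            index.append(prev_index[n])
            continue
        digest = hashlib.sha256(disk[i:i + CHUNK]).hexdigest()
        index.append(digest)
        if digest not in store:
            # Deduplication: a chunk already in the store is never resent.
            store[digest] = disk[i:i + CHUNK]
            uploaded += 1
    return index, uploaded
```

Back up a three-chunk disk, change one block, and the second run transfers a single chunk while the other two index entries are reused untouched.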

The Native B2 Integration

When Proxmox released PBS 4.0 in August 2025, they added native S3-compatible storage support as a “tech preview.” This was the missing piece. Suddenly, I could point PBS directly at Backblaze B2 and have it handle the sync automatically. No more kludgy rclone scripts or manual uploads. Setting it up was surprisingly straightforward:

  1. Create a B2 bucket in Backblaze’s console
  2. Generate an Application Key with read/write permissions to that bucket
  3. Add the S3 endpoint in PBS using your B2 region’s URL (something like s3.us-west-000.backblazeb2.com)
  4. Create a new datastore backed by that S3 endpoint

The critical thing to remember: You need a local cache partition. PBS recommends 64–128GB dedicated to caching chunks before they’re uploaded to S3. If you skip this and point the cache at your root disk, you’ll fill it up fast and crash your server.

One thing nobody warned me about: verification jobs on S3 datastores can get expensive. My Backblaze bill jumped from $5 to $25/month because PBS was downloading every chunk to verify integrity. Now I run verification once or twice a month instead of nightly, and my bill dropped back to sanity.
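Conceptually, a verify job is cheap to describe even if it’s expensive to run against S3: every chunk lives under its own SHA-256 digest, so verification amounts to re-reading each chunk and recomputing the hash. A minimal sketch, modeling the datastore as a plain dict (my simplification, not the on-disk layout):

```python
import hashlib

def verify_datastore(store):
    """Return the digests of chunks whose content no longer matches the
    SHA-256 name they are stored under (i.e. corrupt chunks).

    On an S3-backed datastore every re-read is a GET request, which is
    exactly why running this nightly inflates the egress bill.
    """
    return [digest for digest, chunk in store.items()
            if hashlib.sha256(chunk).hexdigest() != digest]
```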

The LXC Debate

I know, I know. The Proxmox forums are full of people screaming that PBS should run on separate hardware, and they’re not wrong. In an ideal world, your backup server should survive independently of your production server. If your PVE host dies and your PBS is a VM on that same host, you’ve lost both your production environment and your backups simultaneously.

But here’s the thing: I’m running a single-node homelab on an HP ProDesk 600 with an i5-7500T and 16GB of RAM. I don’t have a second machine lying around. Running PBS in a privileged LXC container on the same host isn’t perfect, but it’s infinitely better than having no dedicated backup server at all. My setup:

  • PBS in a privileged LXC (unprivileged containers don’t work well with PBS’s disk operations)
  • Separate physical disk passed through to the LXC for the backup datastore (not the same disk as my VM storage)
  • Debian 12 (Bookworm) template because PBS packages aren’t built for Debian 13 yet
  • 4GB RAM, 2 cores allocated to the container

The LXC approach has some advantages over a full VM: it shares the host kernel (less overhead), starts faster, and uses fewer resources. For a single-node setup where I’m optimizing every megabyte of RAM, that matters.
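For reference, the container definition behind that list is small. A sketch of the Proxmox config file (the container ID 200, the host mount point /mnt/pbs-disk, and the sizes are my examples, not prescriptions):

```
# /etc/pve/lxc/200.conf -- hypothetical ID; paths and sizes are examples
arch: amd64
cores: 2
memory: 4096
hostname: pbs
ostype: debian
rootfs: local-lvm:vm-200-disk-0,size=16G
# Bind-mount the dedicated backup disk (mounted on the host at /mnt/pbs-disk)
mp0: /mnt/pbs-disk,mp=/mnt/datastore
# Privileged container: note the *absence* of "unprivileged: 1"
```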

Is it ideal? No. Does it serve my purpose? Absolutely. My backups run reliably every night, deduplicate beautifully, and sync to B2 without me touching anything. If my host dies, I can spin up a new PBS on fresh hardware, point it at my B2 bucket, and restore everything. That’s good enough for me.

Garbage Collection

PBS has two distinct cleanup mechanisms that work together: pruning and garbage collection.

Pruning is the logical side. You define retention policies: keep the last 7 daily backups, 4 weekly backups, 3 monthly backups, whatever makes sense for your workload. When a backup falls outside these retention windows, PBS marks it for deletion. But it doesn’t actually delete anything yet.

Garbage collection is the physical side. It runs on a schedule (I do it weekly), scans for chunks that are no longer referenced by any backup snapshot, and deletes them from disk. This is where the deduplication magic pays off. If you have 50 backups all referencing the same chunk, GC won’t touch that chunk until all 50 snapshots are pruned.

The key insight: prune and GC are separate operations. You can prune without running GC (chunks stay on disk but aren’t referenced), and running GC without pruning does nothing (there are no orphaned chunks to clean up). Both need to happen for effective storage management. My retention policy:

  • keep-daily = 14 (two weeks of daily snapshots)
  • keep-weekly = 8 (two months of weekly snapshots)
  • keep-monthly = 6 (six months of monthly snapshots)

This gives me granular recovery for recent changes and long-term archival without drowning in storage costs.
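Both halves of that cleanup are simple enough to sketch. Below, prune picks the snapshots a keep-daily/weekly/monthly policy retains (a simplified version of PBS’s selection logic: the newest snapshot in each period wins), and garbage_collect then drops any chunk that no surviving snapshot index references. Names and data shapes are mine:

```python
from datetime import date

def prune(snapshots, keep_daily, keep_weekly, keep_monthly):
    """Return the set of snapshot dates a keep-* policy retains.

    For each rule, walk snapshots newest-first and keep the newest
    snapshot in each of the first N distinct periods (day / ISO week /
    month). A snapshot kept by any rule survives.
    """
    keep = set()
    rules = [(keep_daily, lambda d: d.isoformat()),
             (keep_weekly, lambda d: tuple(d.isocalendar()[:2])),
             (keep_monthly, lambda d: (d.year, d.month))]
    for count, period_of in rules:
        periods = []
        for snap in sorted(snapshots, reverse=True):  # newest first
            p = period_of(snap)
            if p in periods:
                continue              # this period already kept its newest
            if len(periods) == count:
                break                 # this rule's quota is used up
            periods.append(p)
            keep.add(snap)
    return keep

def garbage_collect(store, surviving_indexes):
    """Delete chunks that no surviving snapshot index references."""
    referenced = set().union(*surviving_indexes) if surviving_indexes else set()
    removed = [d for d in store if d not in referenced]
    for digest in removed:
        del store[digest]
    return removed
```

Note how garbage_collect does nothing until prune has actually shrunk the set of surviving indexes: exactly the prune-then-GC dependency described above.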

Disaster Recovery

Last month, I simulated a complete hardware failure. Here’s what I did:

  1. Installed Proxmox VE on fresh hardware (took about 15 minutes)
  2. Spun up a new PBS LXC using the same Debian 12 template
  3. Connected to my Backblaze B2 bucket using the same S3 credentials
  4. Waited for PBS to index the existing backups (this took a few hours for ~200GB)
  5. Restored my core VM from the latest snapshot

The entire process, from bare metal to running services, took under two hours. Compare that to the old way of reinstalling Debian, setting up Docker, downloading from B2, recreating configurations, and debugging permission errors, which would have taken an entire weekend and a lot of caffeine.

The key realization: your backup server’s configuration is also backed up. When I pointed the new PBS at my B2 bucket, it automatically discovered all my existing backups, retention policies, and verification history. I didn’t need to reconfigure anything. The backups were just there, waiting to be restored.

The Sync Job

Every night at 2 AM, my PBS runs a sync job that pushes my local backup datastore to Backblaze B2. The job is configured as a push from my local PBS, which means my local PBS initiates the connection to B2, never the other way around.

This design has a subtle security benefit: the connection is outbound-only, so the B2 bucket needs no inbound access into my network. It’s just sitting there, waiting for my PBS to push data to it.

The sync is incremental, just like the backups themselves. Only new or changed chunks get transferred. My nightly sync typically transfers 2–5GB even though my total backup size is over 200GB. On my 100Mbps upload connection, that completes in about 10 minutes.
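Content addressing is also what makes the sync incremental for free: deciding what to upload is just a set difference over chunk digests. A sketch, with plain dicts standing in for the local and S3-backed datastores:

```python
def sync_chunks(local, remote):
    """One-way, incremental sync: copy only the chunks the remote
    datastore doesn't already have.

    Chunks are immutable and named by their digest, so nothing is ever
    re-uploaded or rewritten; returns the number of chunks transferred.
    """
    missing = local.keys() - remote.keys()
    for digest in missing:
        remote[digest] = local[digest]
    return len(missing)
```

Run it twice in a row and the second pass transfers nothing, which is exactly why a nightly sync moves only a few gigabytes against a 200GB datastore.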

What I’d Do Differently

If I were starting over with unlimited budget, I’d run PBS on dedicated hardware, like a mini PC or a Raspberry Pi: silent, power-efficient, and more than capable of running PBS for a homelab. I’d also consider a second PBS at a friend’s house for true geographic redundancy. But for a single-node homelab on a budget? My current setup works remarkably well:

  • PBS in LXC on the same host (fast local restores)
  • Separate physical disk for backup storage (survives single-disk failure)
  • Nightly sync to Backblaze B2 (offsite protection)
  • Weekly garbage collection (keeps storage tidy)
  • Bi-weekly verification (ensures backup integrity without breaking the bank)

Enterprise Backup on a Homelab Budget

The combination of PBS’s incremental, deduplicated backups with native B2 integration gives me something I never had with bare-metal Docker: confidence. I know that if my server dies tomorrow, I can have everything running again in under two hours. I know that my backups are verified, deduplicated, and stored safely offsite. I know that the whole system runs itself, and I haven’t touched my backup configuration in months.

Is running PBS on the same host as PVE ideal? No. But it’s a pragmatic solution for single-node homelabs that takes 90% of the risk off the table. The remaining 10%, the catastrophic scenario where both your host and your backup server die simultaneously, is mitigated by the B2 sync. If that happens, you’re rebuilding from cloud backups, which is still a hundred times better than rebuilding from scratch.

If you’re still relying on manual backups or shell scripts, do yourself a favor and set up PBS. The incremental backups alone will save you hours of transfer time, and the native B2 integration means you can implement proper offsite protection without breaking the bank. Your future self will thank you.

