A nice long weekend (in the US at least) is a great time to deal with your backups. You do have backups, right? They work, right?
For a number of years now I have been using rdiff-backup for my backups. Unfortunately, a week or so before Flock my backups started erroring out, and I put off looking into it for various reasons until now. rdiff-backup has a lot going for it, but these days it also has a lot against it:
- It’s written in Python 2 with no Python 3 support, and as we all know, Python 2 is going away before too long.
- There’s not really any upstream development. A few years ago development was handed off to a new team, but they haven’t really been very active either.
- No encryption/compression support
- Slow, especially when you have a lot of snapshots.
So, I decided it was time to look over the current crop of backup programs and see what was available. My criteria, which may be very different from yours:
- Packaged in Fedora (bonus points for EPEL also. I don’t have any RHEL/CentOS boxes at home, but I’d like to be able to use whatever I find in Fedora Infrastructure too, where we are also using rdiff-backup).
- Encryption/Compression support.
- Active and responsive upstream
I went and looked at: BackupPC, zbackup, borgbackup, burp, bacula, obnam, amanda, restic, duplicity, and bup (and possibly others I don’t remember). From those I narrowed things down to restic and borgbackup. Both have very active upstreams and are packaged in Fedora, but restic doesn’t have compression support yet (it’s written in Go and they are waiting for a native implementation). Also, restic isn’t (yet) packaged in EPEL.
So, with that I took a closer look at borgbackup. Upstream is quite active, and there’s support for various kinds of encryption and compression: you can store an encryption key in the repository or outside of it, and use zlib, lz4, or lzma compression. There is also an interesting ‘append-only mode’: you can set that on the ssh key a particular client uses to contact your backup server, and then that client can only append, never delete, backup data. That might be nice if you are backing up a bunch of clients to the same repository (and thus getting the deduplication savings). Of course, you can only run one backup at a time against the same repository, so you would need to spread them out or keep them retrying or something. That’s likely not a big deal for my home network, since I only have the laptop and a main server to back up here, but in larger setups it could be a problem.
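As a rough sketch of what that setup looks like (the server name, repository path, and key are all made up for illustration), the append-only restriction lives in the server’s authorized_keys, and the encryption and compression choices are made with `borg init` and `borg create`:

```shell
# On the backup server: restrict a client's ssh key to append-only borg
# access by prefixing its line in ~/.ssh/authorized_keys (key shortened):
#   command="borg serve --append-only",restrict ssh-ed25519 AAAA... user@laptop

# On the client: initialize a repository, storing the encryption key in the
# repository itself (use --encryption=keyfile to keep the key outside it):
borg init --encryption=repokey backupserver:/srv/backups/laptop.borg

# Compression is chosen per backup run; lz4 is fast, lzma trades CPU for size:
borg create --compression lz4 \
    backupserver:/srv/backups/laptop.borg::{hostname}-{now} ~/
```

The `{hostname}-{now}` placeholders are expanded by borg itself, which gives each archive a unique, dated name without any scripting.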
So, after a bit of cleanup on my laptop (I had some old copies of emails in several places, photos in a few places, junk I no longer needed/wanted, etc.), I fired off an initial borg backup to my storage server. It’s still running now, but it seems to be going along pretty nicely. As soon as I have a full backup, I’ll try a restore of a few random files to make sure all is well.

The final part of my backups is moving them from my storage server ‘off site’. In past years I copied my backups to an encrypted external drive and gave it to a friend to store, but I don’t have such a local person here, so I will need to investigate Amazon Glacier or other options. Look for another post on that soon!
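When the time comes for that restore spot-check, it might look something like this (the archive name and file paths here are invented examples):

```shell
# Verify the repository and archive metadata are consistent:
borg check backupserver:/srv/backups/laptop.borg

# List the archives to find the one to restore from:
borg list backupserver:/srv/backups/laptop.borg

# Extract a couple of random files into a scratch directory and compare them
# against the originals (borg extract restores relative to the cwd):
mkdir -p /tmp/restore-test && cd /tmp/restore-test
borg extract backupserver:/srv/backups/laptop.borg::laptop-2017-09-04 \
    home/user/.bashrc home/user/Photos/example.jpg
```

A backup you have never restored from is only a theory, so a spot-check like this is worth the few minutes it takes.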
If you are reading this, perhaps you could take a minute to think about your backups: make sure they exist, are running ok, and actually work.