2d83a44b6d09fc262a2cd078aff83fd6b245521a — Arsen Arsenović 3 months ago 330bffe master
publish 2022-02-15-sweet-unattended-backups.rst
3 files changed, 364 insertions(+), 2 deletions(-)

M css/main.css
A posts/2022-02-15-sweet-unattended-backups.rst
M posts/newpost
M css/main.css => css/main.css +39 -2
@@ 38,6 38,43 @@ a {

pre.sourceCode {
	background-color: #f3f3f3;
}

div.note > div.title {
	margin: auto 0.5ch;
	text-transform: uppercase;
	text-align: center;
	vertical-align: middle;
	font-weight: bold;
}

div.note > p {
	display: block;
	margin: 1ch;
}

div.note {
	clear: both;
	margin: auto;
	background-color: #eee;
	padding: 0.2ch 2ch;
	display: flex;
	flex-direction: row;
	max-width: 80%;
}

div.sourceCode {
	position: relative;
}

div.sourceCode[data-caption]::before {
	content: attr(data-caption);

	display: block;
	position: absolute;
	top: 0.4em;
	right: 0.4em;

	font-family: monospace;
	font-size: 0.66em;
	line-height: 1;
	opacity: 0.5;
}

A posts/2022-02-15-sweet-unattended-backups.rst => posts/2022-02-15-sweet-unattended-backups.rst +322 -0
@@ 0,0 1,322 @@
title: Unattended backups with ZFS, restic, Backblaze B2 and systemd
date: 2022-02-15
tags: restic, administration, linux, zfs
description: >-
    Regular, incremental and convenient backups (ish), without interference
In the past, whenever I had data loss due to hardware failure, I'd just take
it on the chin and reassemble as much as possible.
Due to crappy internet connections (low-upload DOCSIS or, even worse, ADSL),
backing up was simply infeasible (for instance, one attempt took about two
weeks of continuous uploading, followed by day-long incremental updates).

That changed recently, since I moved and got fiber installed, allowing for an
acceptable 100Mbps upload.

The other major reason why I didn't do regular, wide-scope backups is
convenience: I would have had to remount :code:`/home` as read-only, which
means lots of inconvenient downtime and either manual or highly intrusive
work.

The goal is as follows: every Monday morning, at around 3AM, automatically run
an incremental backup to Backblaze (since it's pretty cheap).
Have it be scheduled at a very low priority, so that it doesn't interfere with
normal computer use, and have it report failures via mail [#mail]_.

.. [#mail] The system I have set up currently is fully local; I'd like to
   have null clients produce and email me results in the future.

.. code-block:: sh

   [i] ~$ systemctl status restic-weekly.service
   ○ restic-weekly.service - Weekly unattended /home backups
        Loaded: loaded (/etc/systemd/system/restic-weekly.service; static)
        Active: inactive (dead) since Mon 2022-02-14 03:24:45 CET; 2 days ago
   TriggeredBy: ● restic-weekly.timer
       Process: 963988 ExecStart=/home/execute_backup.sh (code=exited, status=0/SUCCESS)
      Main PID: 963988 (code=exited, status=0/SUCCESS)
           CPU: 4min 33.140s

   Feb 14 03:24:34 bstg execute_backup.sh[964064]: Added to the repo: 5.833 GiB
   Feb 14 03:24:34 bstg execute_backup.sh[964064]: processed 1534343 files, 352.944 GiB in 24:27
   Feb 14 03:24:34 bstg execute_backup.sh[964064]: snapshot 9fce7039 saved
   Feb 14 03:24:35 bstg execute_backup.sh[963988]: + _clean
   Feb 14 03:24:35 bstg execute_backup.sh[963988]: + cd /
   Feb 14 03:24:35 bstg execute_backup.sh[963988]: + sleep 5
   Feb 14 03:24:40 bstg execute_backup.sh[963988]: + zfs destroy zhome@restic2022_07_1
   Feb 14 03:24:45 bstg systemd[1]: restic-weekly.service: Deactivated successfully.
   Feb 14 03:24:45 bstg systemd[1]: Finished Weekly unattended /home backups.
   Feb 14 03:24:45 bstg systemd[1]: restic-weekly.service: Consumed 4min 33.140s CPU time.

Snapshots as an alternative to downtime
---------------------------------------

Backing up a file system that is in use can lead to various kinds of data
consistency problems and thus it is preferable to operate on snapshots instead.

.. NOTE::
   Snapshots alone do not solve all issues of concurrent writes, but I've
   decided that it's good enough for my uses.
   Non-atomic operations (e.g. a single state being updated across two files)
   could still lead to inconsistent on-disk content, but snapshots reduce the
   time frame in which this may happen to milliseconds or less.

In anticipation of properly implementing backups, I began using ZFS for my
:code:`/home` and did some maintenance, starting with the removal of old,
unused files. I also made sure to add cache tags and exclude markers where
appropriate.

ZFS exposes snapshots at :code:`$MOUNTPOINT/.zfs/snapshot/$LABEL`, but these
act like separate devices (they have a different device ID) and are on a
different mountpoint.
This will be important later.
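
For illustration, device IDs can be inspected with :code:`stat`; the ZFS paths
in the comment below are hypothetical and mirror the layout used later in this
post.

.. code-block:: sh

   # stat -c '%d' prints the device ID of the filesystem containing a path.
   # On the (hypothetical) layout from this post, the live dataset and its
   # snapshot report different IDs:
   #
   #   stat -c '%d' /home                          # the zhome dataset
   #   stat -c '%d' /home/.zfs/snapshot/somelabel  # a different ID
   #
   # Runnable anywhere: every path resolves to some numeric device ID.
   stat -c '%d' /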

restic setup
------------

restic is a relatively new backup tool that I picked because it seems fairly
robust, easy to use and well implemented.
I gave its `design document`_ a quick review and it seemed appropriate.

The snapshots restic produces reference the mount point and device IDs, along
with inode numbers, to detect hardlinks; this is slightly problematic, since
both of those differ on ZFS snapshots.
Thankfully, there are two outstanding PRs (`#3200`_ and `#3599`_) that fix this.
One backport and :code:`git format-patch` later, they're ready to be dropped
into :code:`/etc/portage/patches/app-backup/restic` and forgotten about until
they break an update.

.. _`design document`: https://github.com/restic/restic/blob/fb4c5af5c4613866931773849dd8bf4755d0d2ce/doc/design.rst
.. _`#3200`: https://github.com/restic/restic/pull/3200
.. _`#3599`: https://github.com/restic/restic/pull/3599

restic backups operate on remote repositories, in my instance a bucket
on Backblaze B2, an object storage service.
The repository to operate on can be set via the :code:`RESTIC_REPOSITORY`
environment variable, and requires a password provided via
:code:`RESTIC_PASSWORD`.
The B2 backend also requires an account ID and key, provided in
:code:`B2_ACCOUNT_ID` and :code:`B2_ACCOUNT_KEY`.
I format these so that they can be :code:`eval`'d by a shell and encrypt them
with :code:`systemd-creds encrypt --name rcreds - /var/lib/backup_creds`.
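
As a sketch, the eval-able credentials file might look like the following (all
values are placeholders; :code:`RESTIC_REPOSITORY` and :code:`RESTIC_PASSWORD`
are restic's standard variables, and the B2 pair is what the backup script
later checks for):

.. code-block:: sh

   # Placeholder credentials file; every value below is fake.
   cat > backup_creds.txt <<'CREDS'
   export RESTIC_REPOSITORY='b2:some-bucket:/'
   export RESTIC_PASSWORD='correct horse battery staple'
   export B2_ACCOUNT_ID='000000000000'
   export B2_ACCOUNT_KEY='K000xxxxxxxx'
   CREDS

   # Check that it is indeed eval-able:
   eval "$(cat backup_creds.txt)"
   : "${RESTIC_REPOSITORY:?}" "${B2_ACCOUNT_ID:?}"

   # Then, as root, encrypt it (and shred the plain-text copy):
   #   systemd-creds encrypt --name rcreds backup_creds.txt /var/lib/backup_creds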

Repositories have to be initialized with :code:`restic init`.
This operation sets up the basic structure and puts keys in place.

At this point, it would be wise to copy the credentials file and store it
somewhere safe (perhaps in a `password manager <https://passwordstore.org/>`_).

systemd and timers
------------------

systemd_ provides :code:`.timer` units with some nifty features.
Most useful among these are :code:`Persistent=`, which acts like Anacron, and
:code:`WakeSystem=`, which can resume the system from sleep.

.. _systemd: https://systemd.io/

.. code-block:: ini
   :caption: restic-weekly.timer
   :emphasize-lines: 5

   [Unit]
   Description=Weekly unattended /home backups (timer)

   [Timer]
   OnCalendar=Mon *-*-* 03:00:00
   Persistent=true
   WakeSystem=true
   Unit=restic-weekly.service

   [Install]
   WantedBy=timers.target

   # vim: ft=systemd :
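
The calendar expression can be sanity-checked with :code:`systemd-analyze
calendar`, which prints the next time it will elapse (guarded below, since
systemd may be absent on the machine running it):

.. code-block:: sh

   # Validate the OnCalendar expression and show its next elapse;
   # skipped quietly on systems without systemd.
   if command -v systemd-analyze >/dev/null 2>&1; then
           systemd-analyze calendar 'Mon *-*-* 03:00:00'
   fi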

.. NOTE::
   This timer will wake your computer from sleep and won't put it back to
   sleep after.

:code:`OnCalendar=` defines when to run the event, in this case each Monday at
three in the morning, while :code:`Unit=` makes the timer start
:code:`restic-weekly.service`, which in turn runs the update script:

.. code-block:: ini
   :caption: restic-weekly.service

   [Unit]
   Description=Weekly unattended /home backups
   OnFailure=status-email-arsen@%n.service      # {1}

   [Service]
   Type=oneshot                                 # {2}
   ExecStart=/home/execute_backup.sh
   Nice=19                                      # {3}
   IOSchedulingClass=idle                       # { }

   # vim: ft=systemd :

.. TODO(arsen): better element for code callouts

#. When the backup fails, start the
   :code:`status-email-arsen@restic-weekly.service` service (the :code:`%n`
   expands into the name of the current unit),
#. Only run the service once,
#. Run with the lowest CPU priority (19) and the lowest IO scheduling class
   (idle).
   This ensures the system remains virtually unaffected by the backup, as
   CFS will only allocate the restic threads otherwise-idle CPU time, and
   their I/O is only performed when there is no other work to do.
   As we are operating on a snapshot, the extended runtime is not an issue,
   and we can focus on not being intrusive to the user.
   See the :code:`sched(7)` manual page for more info.
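
For reference, :code:`Nice=19` and :code:`IOSchedulingClass=idle` roughly
correspond to the following shell invocation (a sketch; :code:`ionice` is
Linux-specific, hence the guard):

.. code-block:: sh

   # Run a command at the lowest CPU priority and the idle I/O class,
   # mirroring Nice=19 / IOSchedulingClass=idle from the unit above.
   if command -v ionice >/dev/null 2>&1; then
           ionice -c idle nice -n 19 true
   else
           nice -n 19 true
   fi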

The last of these units is email failure reporting.
To get emails to work, I installed OpenSMTPD_ to deliver mail to my local
mailbox, and upon doing that I've found out that over the last five years I've
accumulated around 350 thousand emails in my spool, sent in response to
failures in cron jobs.

.. _OpenSMTPD: https://www.opensmtpd.org/

The status email unit is a short bit of boilerplate:

.. code-block:: ini
   :caption: status-email-arsen@.service

   [Unit]
   Description=status email for %i to user

   [Service]
   ExecStart=/usr/local/bin/systemd-email arsen@%H %i
   User=nobody

... and the script it invokes:

.. code-block:: bash
   :caption: systemd-email

   [ -n "$1" ] && [ -n "$2" ] || exit 1
   set -xeu

   /usr/bin/sendmail -t <<ERRMAIL
   To: $1
   From: systemd <root@$(hostname)>
   Subject: $2
   Content-Transfer-Encoding: 8bit
   Content-Type: text/plain; charset=UTF-8

   $(systemctl status --full "$2")
   ERRMAIL

The user in this unit is set to :code:`nobody`, which systemd will complain
about.
Ignore that warning; the advertised alternative (:code:`DynamicUser=`) will
not work for us, since it forcefully restricts SUID/SGID, which prevents
:code:`sendmail` from switching to the email group to submit to the spool.

Pulling it together
-------------------

With that out of the way, the bulk of the work is handled by a single script:

.. code-block:: sh
   :caption: execute_backup.sh

   eval "$(systemd-creds decrypt --name rcreds /var/lib/backup_creds -)"
   : "${B2_ACCOUNT_ID:?B2_ACCOUNT_ID unset}"
   : "${B2_ACCOUNT_KEY:?B2_ACCOUNT_KEY unset}"

   set -exu

   export RESTIC_CACHE_DIR=/var/cache/restic
   [ -t 0 ] || export RESTIC_PROGRESS_FPS=0.0016666

   snapshot="$(date +restic%+4Y_%U_%u)"
   zfs snap "zhome@$snapshot"

   _clean() {
           cd /  # free up the dataset for destruction
           sleep 5 # ?????????????
           zfs destroy "zhome@$snapshot"
   }
   trap _clean EXIT

   cd /home/.zfs/snapshot/"$snapshot"
   restic backup \
           --exclude .cache \
           --exclude-caches \
           --exclude '*/dls/' \
           --exclude-if-present .resticexclude \
           --device-map "$(stat -c '%d' .):$(stat -c '%d' /home)" \
           --set-path /home \
           .

Going over the blocks one-by-one:

#. We load the credentials from the previously encrypted file and check that
   we got all the parameters needed, then
#. We tell restic to use :code:`/var/cache/restic` as the cache directory, as
   it would default to a per-user cache under :code:`$XDG_CACHE_HOME/restic`
   otherwise, then
#. If not running in a :code:`tty`, we only update the progress of the backup
   once per ten minutes, so as not to spam the logs, then
#. We take a snapshot, labeled :code:`restic%+4Y_%U_%u`, in order to have a
   unique value and to know when a cleanup failed after the fact, then
#. We use the :code:`EXIT` trap to clean up after ourselves,
#. We move into the newly taken snapshot, in order to help restic store the
   correct paths in the snapshot, which is later helped by :code:`--set-path`,
   which changes the stored path of the backup from the snapshot directory
   into the home directory, effectively obscuring the fact we ever operated on
   a snapshot, then
#. We initiate the backup, excluding all :code:`.cache` directories, all
   directories tagged with :code:`CACHEDIR.TAG` [#cachedir-tag]_, all download
   directories directly inside :code:`/home/*`, and all directories
   marked with a :code:`.resticexclude` file; restic then maps the snapshot's
   device ID to the normal home mount's device ID, in order to preserve
   unchanged files' status, and finally remaps :code:`.` to :code:`/home`.

I'm unsure why the delay after the :code:`cd` is necessary; I'd have to
recompile ZFS to dump all open files on a :code:`zfs destroy`, or something of
that nature, but I haven't had an opportunity to do that yet.
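
After a run, the repository can be inspected with restic itself. A
hypothetical check might look like this (it needs the credentials in the
environment, and the restore target is just an example path):

.. code-block:: sh

   # List stored snapshots and do a trial restore to a scratch directory;
   # guarded, since this needs restic and the repository credentials.
   if command -v restic >/dev/null 2>&1 && [ -n "${RESTIC_REPOSITORY:-}" ]; then
           restic snapshots
           restic restore latest --target /tmp/restore-test
   fi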

.. [#cachedir-tag] See `this page <https://bford.info/cachedir/>`_ for more
   info on :code:`CACHEDIR.TAG` files.
   Not all programs use them, but many well behaved ones do.
   A notable exception is Chromium, sadly.

This setup does not respect the `3-2-1 rule`_; Backblaze is, for my personal
data, sufficiently robust, and most importantly, inexpensive.

Currently, as I mentioned, failure notification delivery is entirely local.
While I do think that email is the easiest way to do this, it would require
non-local delivery and additional monitoring in order to be reliable (as it
stands, power outages go undetected, whereas delivery to a remote inbox would
not suffer from that problem).
I am likely going to look into creating a VPN to connect all monitored
machines together for notification delivery, and to add additional monitoring
for "high availability" [#ha]_ machines, though that is quite likely not to
happen any time soon.

Think carefully about what data you want to back up, and don't shy away from
dotting around exclude files: build artifacts are not worth backing up!
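
Marking a directory for exclusion is a one-liner either way:
:code:`CACHEDIR.TAG` files must start with the fixed signature from the spec,
while :code:`.resticexclude` (the marker chosen in the script above) merely
needs to exist.

.. code-block:: sh

   # Tag a build directory as a cache (the signature line is fixed by the
   # CACHEDIR.TAG spec) and mark another directory with .resticexclude.
   mkdir -p build-artifacts scratch
   printf 'Signature: 8a477f597d28d172789f06886806bc55\n' \
           > build-artifacts/CACHEDIR.TAG
   touch scratch/.resticexclude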

This post does not cover :code:`restic forget`. I intend to use it as the need
arises (= costs grow noticeably), rather than as a preventive measure, likely
with :code:`--keep-last 4 --keep-yearly 4` or something of that nature.
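
Such a run might look like the following sketch; :code:`--dry-run` previews
what would be forgotten without touching the repository.

.. code-block:: sh

   # Preview a retention policy without deleting anything; needs restic
   # and the repository credentials, hence the guard.
   if command -v restic >/dev/null 2>&1 && [ -n "${RESTIC_REPOSITORY:-}" ]; then
           restic forget --dry-run --keep-last 4 --keep-yearly 4
   fi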

.. _`3-2-1 rule`: https://en.wikipedia.org/wiki/Backup#3-2-1_rule
.. [#ha] High availability in this context being >99%, no more digits.

    vim: set ft=rst sw=4 et :

M posts/newpost => posts/newpost +3 -0
@@ 14,5 14,8 @@ cat <<EOF >"$file"
title: $1
date: $slugdate

    vim: set ft=rst sw=4 et :
EOF

exec ${EDITOR} "$file"