####################
Secure Rsync Dropbox
####################

This is an attempt at creating an indelible dropbox for data over rsync.
One might, for example, want to push data to a server and have reasonably
high confidence that the client is powerless to delete it.

I suggest running this atop a file system which does data deduplication,
such as HAMMER or ZFS, but this is not critical by any means.  This is
probably not the right tool if your dropbox grows to dozens of gigabytes,
but for smaller collections it should work fine.

Overview
========

We create three areas: ``staging``, ``staged``, and ``backups``.  Clients
write into the ``staging`` area, and a post-xfer hook copies (with rsync,
of course!) the data into the ``staged`` area, stashing a time-stamped
backup of anything it overwrites in the ``backups`` area.  Since the
client has no (write) access to the ``staged`` or ``backups`` areas, we
can be reasonably sure that the client cannot actually destroy anything
once it has been staged.

Directories
===========

At some path of your choosing, PATH, make a directory tree like this::

    drwxr-xr-x  2 root     wheel    2 Sep 22 07:24 backups
    drwxr-xr-x  2 root     wheel    2 Sep 22 07:24 logs
    drwxr-xr-x  2 root     wheel    2 Sep 22 07:24 staged
    drwxr-xr-x  2 dropbox  dropbox  2 Sep 22 07:24 staging

(The ``logs`` directory holds the per-connection transfer logs and
manifests written by the post-xfer hook below; like ``backups`` and
``staged``, it must not be writable by the dropbox user.)

rsyncd.conf
===========

First, tell rsyncd about the staging area.  Since we're being a little
paranoid, we'll have it chroot into the staging area, as well as adopt the
user and group IDs of some unprivileged user.  For some applications, we
might leave out the ``delete`` part of the ``refuse options`` directive,
allowing the client to *appear* to have deleted files (the staged copies
and backups are, of course, unaffected).

::

    [staging]
        path = /PATH/staging
        read only = false
        timeout = 30
        max connections = 1
        post-xfer exec = /root/post-staging
        use chroot = true
        uid = dropbox
        gid = dropbox
        refuse options = delete times

/root/post-staging
==================

And here's the magic sauce which runs (as ``root``, not as ``dropbox``)
after each client connection::

    #!/bin/sh

    BACKUPS=${RSYNC_MODULE_PATH}/../backups
    LOGS=${RSYNC_MODULE_PATH}/../logs
    STAGED=${RSYNC_MODULE_PATH}/../staged

    # A per-connection name: epoch seconds, then a count of prior log files
    UNIQNAME=`date +%s`:`ls ${LOGS} | wc -l | sed -e "s/ //g"`

    # Stage the files, being sure to make backups
    /usr/local/bin/rsync -avc --inplace \
        --backup --backup-dir ${BACKUPS}/${UNIQNAME} \
        --log-file ${LOGS}/${UNIQNAME}.rsy \
        --out-format "XFER %n" \
        ${RSYNC_MODULE_PATH}/. ${STAGED}/. \
        > ${LOGS}/${UNIQNAME}.fns

    # Build a manifest of hashes of staged contents: keep only the file
    # (not directory) names rsync reported, undo rsync's \#ooo escaping,
    # and hash the staged copies.
    cat ${LOGS}/${UNIQNAME}.fns | \
        sed -ne '/[^\/]$/ s/^.*XFER //p' | \
        perl -ne 'my $r; chomp; while (/(.*)\\#(\d\d\d)(.*)/) { $r .= $1 . chr (oct $2); $_ = $3 }; $r .= $_; print $r; print "\0"' | \
        (cd ${STAGED}; xargs -0 sha256 -r) > ${LOGS}/${UNIQNAME}.sha
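
Client usage
============

For completeness, here is a sketch of what a push from the client side
might look like.  The host name ``dropbox.example.org`` and the
``./payload`` directory are placeholders.  Note that ``--delete`` and
``--times`` (and therefore ``-a``, which implies the latter) will be
refused by the daemon, so a plain recursive copy is the safe choice::

    # Push the contents of ./payload into the staging module.
    rsync -rv ./payload/ rsync://dropbox.example.org/staging/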
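
Recovering backups
==================

If a client clobbers something, the previous contents live under a
time-stamped tree in ``backups``, laid out with the same relative paths as
``staged``, and the ``.sha`` manifests in ``logs`` record hashes of what
each connection delivered.  A minimal sketch of putting one file back (the
snapshot name and file path here are, of course, made up)::

    # Copy an older version out of a backup snapshot into the staged tree.
    cp -p /PATH/backups/1190000000:17/projects/notes.txt \
          /PATH/staged/projects/notes.txt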