This feed contains pages in the "tech" category.
Backup solution
My previous backup strategy was to copy most of my home directory to another drive whenever someone I knew had computer problems. This clearly isn't a good solution, but it mostly comes down to it being too much effort, and me being too lazy.
Requirements
Hence, I decided to implement a system that would be automatic (without regular input from me). This means it should actually happen! My criteria look something like
- Scriptable (so I don't need to run it manually)
- Incremental (so it doesn't need to re-transfer all data each time)
- Off-site (as one of the main failure modes I'm concerned about is an earthquake destroying my house...)
- Regarding off-site, preferably not needing shell access on the remote end
- Encrypted (see off-site)
- Open source
The software I settled on was restic. While there are lots of other options out there (eg bup, bacula, borg, duplicity, rsync, rclone, rdiff-backup), I liked restic's support for Amazon S3 (for which I already had an account; other cloud providers are available) and its relative ease of configuration. However, I didn't try any of the other options; I'm sure most of them are good too. See https://wiki.archlinux.org/index.php/Synchronization_and_backup_programs for a good collection.
I want this to just run in the background, so I am using systemd timers to run things automatically. My plan was to run a backup every day, so a daily timer seemed like a good idea. However, I found that the task would often fail (due to lack of a network connection) and so miss the daily backup. Hence I have switched to a half-hourly timer running a script that checks when a backup was last run. This should ensure that backups are run sufficiently often.
Scripts
Here are the contents of a few files
~/scripts/restic-env-S3.sh
#!/bin/sh
export RESTIC_REPOSITORY="s3:https://s3.amazonaws.com/MY_BUCKET_NAME"
export AWS_ACCESS_KEY_ID="MY_AWS_KEY_ID"
export AWS_SECRET_ACCESS_KEY="MY_AWS_ACCESS_KEY"
export RESTIC_PASSWORD="MY_RESTIC_PASSWORD"
This file defines parameters needed to access the repository. Obviously, if not using Amazon S3, the RESTIC_REPOSITORY format will be different. I have one of these files for S3, and one for my USB HDD.
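For the USB HDD, the file looks much the same but points at a plain directory path (restic calls this a local repository). A minimal sketch (the file name is just my guess, to match the S3 one), assuming a hypothetical mount point of /mnt/backup-hdd:
~/scripts/restic-env-HDD.sh
#!/bin/sh
# Local repository on the external drive; the path here is an assumption, adjust to your mount point
export RESTIC_REPOSITORY="/mnt/backup-hdd/restic-repo"
export RESTIC_PASSWORD="MY_RESTIC_PASSWORD"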
~/scripts/backup.sh
#!/bin/sh
#Must pass as argument the env file
. "$1"
if [ "x$RESTIC_REPOSITORY" = "x" ]; then
echo "RESTIC_REPOSITORY must be set"
exit 1
fi
FORCE_BACKUP=0
if [ "x$OVERRIDE_TIMESTAMP_CHK" = "x1" ]; then
echo "Forcing backup [$RESTIC_REPOSITORY]"
FORCE_BACKUP=1
fi
TOUCH_FILE="$HOME/backup_touches/$(echo $RESTIC_REPOSITORY | sha512sum -|cut -f1 -d' ')"
FEXISTS=$(test -f $TOUCH_FILE;echo $? )
FRECENT=$(find "$(dirname $TOUCH_FILE)" -mtime -1 -name "$(basename $TOUCH_FILE)" 2>/dev/null | grep -q "." ;echo $? )
if [ $FEXISTS -eq 1 -o $FRECENT -eq 1 -o $FORCE_BACKUP -eq 1 ];
then
sleep 10
echo "Backing up, as no backup made in last day [$RESTIC_REPOSITORY]"
if ~/bin/restic backup --tag kronos /etc /home --exclude-file=$HOME/scripts/excludelist ;
then
echo "Backup succeeeded [$RESTIC_REPOSITORY]"
touch "$TOUCH_FILE"
$HOME/scripts/forget.sh "$1"
else
echo "Problem with backup [$RESTIC_REPOSITORY]"
exit 2
fi
exit 0
else
echo "Not backing up, as there is a recent backup [$RESTIC_REPOSITORY]"
fi
This script takes as an argument the previous file (defining the repository parameters), and actually runs the backup. It only runs the backup if the relevant timestamp file is older than 1 day. That could be adjusted to another period of time, if desired. Alternatively, if OVERRIDE_TIMESTAMP_CHK is 1, then it always runs the backup.
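For reference, forcing an immediate backup by hand looks something like this (using the S3 env file from above):
OVERRIDE_TIMESTAMP_CHK=1 ~/scripts/backup.sh ~/scripts/restic-env-S3.sh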
~/scripts/forget.sh
#!/bin/sh
#Must pass as argument the env file
. "$1"
if [ "x$RESTIC_REPOSITORY" = "x" ]; then
echo "RESTIC_REPOSITORY must be set"
exit 1
fi
echo "Starting to forget [$RESTIC_REPOSITORY]"
if ~/bin/restic forget -y 100 -m 12 -w 5 -d 7 ; then
echo "Forgotten; what was I doing again? [$RESTIC_REPOSITORY]"
else
echo "Problem forgetting [$RESTIC_REPOSITORY]"
exit 1
fi
This removes old snapshots, such that we keep 7 daily, 5 weekly, 12 monthly, and 100 yearly snapshots. However, no data is actually removed from the repository; a prune command is required for that (run periodically), and I haven't automated that.
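Pruning can be done by hand when space needs to be reclaimed; a sketch, sourcing the same env file so restic knows which repository to operate on:
. ~/scripts/restic-env-S3.sh
~/bin/restic prune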
Those are the scripts necessary to run backups. I'm sure they could be made better, but they seem functional enough for now.
I am also using systemd to run them.
~/.config/systemd/user/backup-S3.timer
[Unit]
Description=Backup using restic on a timer
Wants=network-online.target
After=network.target network-online.target
[Timer]
OnCalendar=*:0/30:00
Persistent=true
[Install]
WantedBy=timers.target
~/.config/systemd/user/backup-S3.service
[Unit]
Description=restic backup to S3
After=systemd-networkd-wait-online.service
Wants=systemd-networkd-wait-online.service
[Service]
Type=simple
Nice=10
Restart=on-failure
RestartSec=60
Environment="HOME=%h"
ExecStart=%h/scripts/backup.sh %h/scripts/restic-env-S3.sh
[Install]
WantedBy=default.target
These systemd user service and timer files work for me, but I would say this was the hardest part of all of this. Specifically, the service will run on startup if the computer was off (or asleep, etc) when it was scheduled, but it will run before the network is properly connected, and so fail. That is what the After and Wants lines are meant to prevent, but they don't seem to work here (probably because a user unit can't depend on system units like network-online.target, or else I don't understand exactly what they mean). Hence I added the Restart=on-failure, so it will retry 60s later in that case. I assume there is a better way to do this.
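One possible workaround (untested on my part; the hostname is just the S3 endpoint from the env file) would be a crude pre-start check in the service that waits for the network to answer before the backup script runs:
ExecStartPre=/bin/sh -c 'until ping -c1 s3.amazonaws.com >/dev/null 2>&1; do sleep 5; done'
If the network never comes up, the default start timeout eventually kills the attempt and Restart=on-failure tries again later.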
For backing up to USB HDD, I have replaced the last block with
[Install]
WantedBy=dev-disk-by\x2duuid-MY_UUID.device
and removed the After and Wants lines, in backup-HDD.service (with a corresponding backup-HDD.timer). Thus, it runs the script every 30 minutes, and also whenever the device is connected (which is preferable for an external drive).
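The escaped device unit name doesn't have to be written by hand; systemd-escape can generate it:
systemd-escape -p --suffix=device /dev/disk/by-uuid/MY_UUID
which prints dev-disk-by\x2duuid-MY_UUID.device.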
The timer and service are enabled with
systemctl --user daemon-reload
systemctl --user enable backup-S3.service
systemctl --user start backup-S3.timer
I am actually running these as a user restic, so I have also run
sudo loginctl enable-linger restic
(note: I access a shell for user restic with sudo -u restic bash, but also need to run export XDG_RUNTIME_DIR=/run/user/1002, where 1002 is the UID of restic, to be able to run the systemctl command)
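Put together, getting a usable shell for the restic user looks something like this (1002 being restic's UID, which id -u restic will confirm):
sudo -u restic bash
export XDG_RUNTIME_DIR=/run/user/$(id -u restic)
systemctl --user list-timers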
I installed a copy of restic for the user restic, and on the binary ran
sudo setcap cap_dac_read_search=+ep ~restic/bin/restic
so that it would have access to all files. This way, I can avoid running as root, yet still back up all files.
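To check the capability took, getcap can be used; the output should look something along these lines (the path assumes restic's home is /home/restic):
getcap ~restic/bin/restic
/home/restic/bin/restic = cap_dac_read_search+ep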
Resources
Some sites I found helpful in doing this:
- https://www.digitalocean.com/community/tutorials/how-to-back-up-data-to-an-object-storage-service-with-the-restic-backup-client
- https://jpmens.net/2017/08/22/my-backup-software-of-choice-restic/
- https://blog.filippo.io/restic-cryptography/
- https://restic.org
- https://restic.readthedocs.io/en/latest/080_examples.html
Here is a brief list of patches that I want a record of:
- comments.diff - (This is no longer required since ikiwiki 3.20100302, where similar functionality was included) adds a comment count to blog posts in ikiwiki.
- prettyname.patch - lists full names, not usernames, under recentchanges in ikiwiki (for git only at the moment).
This page lists some programming stuff I've done; it is not a complete list.
VUW Timetable application
You should realise that this program is a collection of bugs, which sometimes work together to produce some useful results. However, any deviation from the expected input will result in unpredictable output. There is a todo list.
20130930
- Make it load the correct website for the specified year, so it doesn't always just use the current year.
20130928
- Update to work with new VUW website. I hope they don't change it again anytime soon...
20101112
- Make the saved images use the correct-sized canvas
20100105
- Add in new Alan MacDiarmid (AM) building
- Make it easier to disable the s/CHEM/ALCH/ stuff
20090713
- Fix streams
- Implement s/CHEM/ALCH/
VUW Wireless Login (VUWWL Prefs)
20120328 Initial release
- This was developed to automatically manage the web-based authentication of the wireless at VUW.
- The idea is that there is an open wifi network, but to have access to the internet one must authenticate via some webpage. This is a pain, especially as the authentication must be performed again every time the wifi connection is lost. Hence, I wrote an Android app that should work on Android 2.1 onwards (tested only on CM7.2/Android 2.3.7). Any time the wifi connection changes, it checks the network name, and if it's the correct one (configurable), it tries to log in.
- The website it tries to authenticate with is configurable, as are three fields (defaulting to username, password, domain). Also, there are non-configurable fields Submit=Submit and buttonClicked=4. The request is sent as POST not GET, though in the future this may be selectable.
- It is possible to manually request that it try to log in, but the network name must still match. There must also be a wifi connection.
20120710 Update for ICS compatibility
- Removes the various notifications, which might be less annoying.
- Now works on ICS, as the notifications appeared to break it.
- In a future version, should be able to have some arbitrary number of rules.
20120729 Adds support for more than one rule
- Now supports an arbitrary number of distinct rules, with a user-chosen ID.
- Only runs rules that match the SSID of the connected network.
- Allows deletion, but not deactivation, of rules.
- Still shows no notifications; this is certainly less annoying.
Various patches
This list post (http://www.mail-archive.com/debian-bugs-dist@lists.debian.org/msg503765.html) includes a fix for python-4suite-xml, which is used by opensync-plugin-googlecalendar.
Okay, this is further to Move an existing system to LVM on RAID1, but instead of moving an existing system, we are using the LiveCD (not alternate) to install onto LVM on top of RAID1.
- Boot the LiveCD you want to install from (eg 9.10 alpha4 is what I used, there are newer versions out already though).
Install mdadm and lvm
:/# apt-get install mdadm lvm2
If you don't have your RAID array set up yet, do that now. Instructions are in Move an existing system to LVM on RAID1. Also set up the LVM volumes.
If you have the array set up,
:/# mdadm --assemble --scan
to set it up.
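If instead you are creating everything from scratch here, the commands are essentially the ones from that post; a rough sketch (device names, volume names and sizes are examples only):
:/# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
:/# pvcreate /dev/md0
:/# vgcreate raid /dev/md0
:/# lvcreate -L20G -nroot raid
:/# lvcreate -L4G -nswap raid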
- If you like, you can set up partitions for the installation outside the installer, which is what I do.
- Start the installer. Just do it as normal, making sure you pick manual partitioning.
- Once installed, you can't just reboot; you won't be able to boot into your new system. Instead, you must start a chroot and install mdadm and lvm, as in the previous post (don't forget to modify /etc/modules too). A rough sketch of the chroot steps follows this list.
- Reboot into your new system. That should be everything.
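For reference, the chroot step mentioned above looks roughly like this (the volume group and LV names are assumptions; use whatever you created in the installer, and the /etc/modules entries mirror the other post):
:/# mount /dev/mapper/raid-root /mnt/target
:/# mount -o bind /dev /mnt/target/dev
:/# mount -t proc none /mnt/target/proc
:/# mount -t sysfs none /mnt/target/sys
:/# chroot /mnt/target
:/# apt-get install mdadm lvm2
:/# echo md >> /etc/modules
:/# echo lvm >> /etc/modules
:/# echo raid1 >> /etc/modules
:/# update-initramfs -k all -u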
You should make backups of stuff before trying this.
My computer had a single 750GB SATAII hard drive in it, with a variety of LVM partitions (/, /home) and non-LVM ones (/boot, swap, some other random ones). I wanted to consolidate slightly, and set up a RAID in case a drive died... (I do have backups of important stuff, but the downtime would be a pain.) I also wanted to avoid reinstalling - while it is easy (using Jaunty), I didn't have a 64-bit Jaunty CD (only 32-bit ones).
There are some things you could do here that should make stuff easier, but I didn't.
- Boot into a LiveCD. I used the 32-bit desktop cd from shipit.
- Install mdadm and lvm2
- Set up a partition of type 0xfd, covering all of one (blank) drive
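For the partitioning step, parted can do it non-interactively; a sketch, with /dev/sdb standing in for the blank drive (the raid flag on an msdos label corresponds to type 0xfd):
#parted -s /dev/sdb mklabel msdos
#parted -s /dev/sdb mkpart primary 0% 100%
#parted -s /dev/sdb set 1 raid on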
Now we need to create our md array
#mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdXN missing
where X and N identify the partition you created (for me, /dev/sdb1). Next, create the LVM physical volume and volume group:
#pvcreate /dev/md0
#vgcreate raid /dev/md0
We have now allocated all of the space on the partition to LVM. Now, create a few lv's:
#lvcreate -L300M -nboot raid
#lvcreate -L20G -njaunty raid
#lvcreate -L30G -nhome raid
#lvcreate -L4G -nswap raid
Adjust sizes as necessary. I have a separate boot partition, though I'm not sure this is necessary with this setup. Now create filesystems
#mkfs.ext4 /dev/mapper/raid-boot
#mkfs.ext4 /dev/mapper/raid-home
#mkfs.ext4 /dev/mapper/raid-jaunty
#mkswap /dev/mapper/raid-swap
Obviously, you can use a different fs if you like. Then, we copy across the data from our old system.
#mount /dev/mapper/raid-boot /mnt/boot-new
#mount /dev/sdXN /mnt/boot-old
#cd /mnt/boot-new
#cp -a ../boot-old/* .
Those directories will probably have to be created first. Use X and N as appropriate. Repeat for the root partition, home, etc. If you don't have a separate boot partition, don't worry about the boot one (I think you shouldn't need it).
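The root filesystem gets the same treatment; a sketch, with the old root assumed to live on an LVM volume called OLD_VG-root (adjust to wherever your existing / actually is):
#mount /dev/mapper/raid-jaunty /mnt/jaunty-new
#mount /dev/mapper/OLD_VG-root /mnt/jaunty-old
#cd /mnt/jaunty-new
#cp -a ../jaunty-old/* .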
Now comes the bit that didn't work -- I was using a 32-bit CD, so couldn't chroot into the new (64-bit) system. So reboot back into the old system. Then,
#mdadm --assemble /dev/md0 /dev/sdb1
#mount /dev/mapper/raid-jaunty /mnt/jaunty
#cd /mnt/jaunty
#mount /dev/mapper/raid-boot boot
#mount -o bind /dev/ dev
#mount -t proc none proc
#mount -t sysfs none sys
#mount -t tmpfs none tmp
#chroot .
#echo md >> /etc/modules
#echo lvm >> /etc/modules
#echo raid1 >> /etc/modules
#update-initramfs -k all -u
#apt-get install grub2
grub2 is needed as we're booting off lvm and raid
#update-grub2
#grub-install /dev/md0 --recheck
This would have been easier if we'd put md, lvm and raid1 in /etc/modules before copying the OS, and run update-initramfs then; I would advise doing that in the future. If you don't do it, then you can't boot, as the initramfs can't find /. Also, while you're there, update /etc/fstab on the new system to point to the right places. It's easiest to use UUIDs to identify things, as then you don't have to worry about device paths.
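blkid will show the UUIDs to use; the resulting fstab entries then look something like this (the UUIDs here are placeholders):
#blkid /dev/mapper/raid-jaunty /dev/mapper/raid-boot /dev/mapper/raid-swap
and in /etc/fstab on the new system:
UUID=ROOT_UUID /     ext4 defaults 0 1
UUID=BOOT_UUID /boot ext4 defaults 0 2
UUID=SWAP_UUID none  swap sw       0 0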
Now reboot back into the new system, which should start fine.
I haven't added the other drive to the array yet, will update when I've done that.
Once all of the data have been copied across, you can wipe the partition table on the first drive. Make sure you have all of the data copied, and backed up, as this will delete everything on your first drive.
Create a new partition exactly as large as (or larger than) the RAID partition you made already. Set its type to 0xfd again. Now we add the other drive into our array:
#mdadm --manage --add /dev/md0 /dev/sda1
Done. You can monitor the copying with
cat /proc/mdstat
Some sites I found helpful in doing this:
- https://help.ubuntu.com/community/Installation/RAID1%2BLVM (general)
- http://support.uni-klu.ac.at/Raid1Howto (about generating initramfs)
- http://grub.enbug.org/LVMandRAID (the last paragraph especially is reassuring)
- https://help.ubuntu.com/community/Installation/LVMOnRaid (old, but helpful. No longer need LILO)
- http://tldp.org/HOWTO/Software-RAID-HOWTO.html
- http://tldp.org/HOWTO/LVM-HOWTO/
- http://wiki.tldp.org/LVM-on-RAID (I didn't use this, but looks helpful)
So, an NTFS-formatted HDD that had some files I'd rather not lose started having some issues. Basically, the BIOS had a hard time detecting it (which slowed the boot process down a lot), and Windows wouldn't boot, failing at various stages. It was also unmountable in Ubuntu.
First, I put it in my computer, and set up a partition of the same size to copy to. Using ddrescue http://www.gnu.org/software/ddrescue/ddrescue.html (in the gddrescue package in Ubuntu), I copied the good data off it (slowly - this took about a week). Then, for a while, I let it have a go at recovering some of the data it had missed. Still, I was unable to mount the drive, getting errors about $MFT and $MFTMirr. So it looks like the master file table was damaged.
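A ddrescue run for this looks something like the following (device names are examples; the third argument is a map/log file that lets ddrescue resume, and the second pass retries only the bad areas):
#ddrescue -f /dev/sdc1 /dev/sda9 /root/rescue.log
#ddrescue -f -d -r3 /dev/sdc1 /dev/sda9 /root/rescue.log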
I had a look with testdisk http://www.cgsecurity.org/wiki/TestDisk, but this didn't help any. I eventually settled on using fls (in sleuthkit) to extract a list of viable filenames, and inodes.
#fls -r -u -p /dev/sda9 > /tmp/filenames
grep was used to cut the file down to just the filenames I was interested in, and I disregarded lines with *s in them, as they seemed to indicate some corruption. I used a script from http://forums.gentoo.org/viewtopic-t-365703.html to do the rest, though as there were spaces in some of the directory names, I had to modify the script so that it took the whole filename as one of the matches (I used sed, though cut would work better). Also, the indices in it were wrong for my input file, but that was easy enough to fix.
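The extraction itself is just icat driven by that list; a minimal sketch of the idea (not the exact script I used), assuming tab-separated fls output and a recovered/ output directory:
mkdir -p recovered
grep '^r/r' /tmp/filenames | grep -v '\*' | while IFS= read -r line; do
    # second whitespace field is the inode (with a trailing colon); the part after the tab is the path
    inode=$(printf '%s\n' "$line" | awk '{print $2}' | tr -d ':')
    name=$(printf '%s\n' "$line" | cut -f2-)
    mkdir -p "recovered/$(dirname "$name")"
    icat /dev/sda9 "$inode" > "recovered/$name"
done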
This recovered essentially all of the files; I'm sure there were a few it was unable to get, but I was still quite pleased.
Sometimes you might install (say) kubuntu-artwork-usplash, when really you want the ubuntu usplash screen (provided by usplash-theme-ubuntu). To restore the original,
sudo apt-get remove kubuntu-artwork-usplash
This has actually happened to me a few times. Before, I think I had to muck about with update-initramfs, update-alternatives, ... But now it seems much simpler. I suppose if you really want to keep the kubuntu one then
sudo apt-get install --reinstall usplash-theme-ubuntu
might work, though I haven't tried that.
Edit: Apparently it's kubuntu-artwork-usplash
Also, this was done on 8.10, might not work on later versions.
This is the first post to this example blog. To add new posts, just add files to the posts/ subdirectory, or use the web form.