pvaneynd: (Default)
I had a long string of problems with our server at home...
Now that my server at home has been running FreeBSD on ZFS for a while, I thought it was time to set up backups again.

Now I have 3 disks in the server: 2 are reliable and are used in the main ZFS pool (zroot); one is a POC WD 'Green' disk that will die if you use it too often. So that one is my local backup disk.

My backup strategy: daily snapshots on zroot, the local backup disk for on-site copies, and a USB hard disk for off-site backups.

At first I thought of using UFS2 on the third disk with rsync, like I did on Linux. Then I read a bit more about ZFS's features and decided to use ZFS on the third disk as well: I created a 'fastbackup' pool and used zfs send and zfs receive to sync the two pools. Syncing the disks was fast. Very fast indeed, about as fast as a simple 'dd' would have been.

However, ZFS's magic does not end there: it can send just the incremental changes between two snapshots. So I wrote a script that takes a new snapshot ("nu", "now" in Dutch), sends the incremental changes to 'fastbackup', and then moves the reference snapshot forward. This seemed faster to me than using rsync, which would always take at least 5 minutes just to declare that the filesystems were in sync. ZFS is ... faster:

+ zfs snapshot zroot/usr@nu
+ zfs send -vi zroot/usr@laatste zroot/usr@nu
+ zfs receive -Fv fastbackup/usr@nu
receiving incremental stream of zroot/usr@nu into fastbackup/usr@nu
received 597MB stream in 10 seconds (59.7MB/sec)

So that's a day's worth of changes, including building and installing emacs and clisp, in 10 seconds.

Script I used below the cut: fastbackup.sh
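Since the script itself is behind the cut, here is a minimal sketch of the rotation logic, reconstructed from the transcript above. The dataset and snapshot names ("laatste" = "last", "nu" = "now") come from that output; the rotation on the fastbackup side is my assumption about how the reference snapshot is moved forward, not the script as written.

```shell
#!/bin/sh
# Sketch of the incremental backup rotation: snapshot, send the delta,
# then move the reference snapshot forward. Pass "echo" for a dry run
# that only prints the commands; pass "" on a real ZFS system.
backup() {
    run=$1
    # 1. Take a fresh snapshot of the source dataset.
    $run zfs snapshot zroot/usr@nu
    # 2. Send only what changed between @laatste and @nu to the backup pool.
    $run sh -c 'zfs send -vi zroot/usr@laatste zroot/usr@nu | zfs receive -Fv fastbackup/usr'
    # 3. Rotate: @nu becomes the new @laatste for the next run.
    $run zfs destroy zroot/usr@laatste
    $run zfs destroy fastbackup/usr@laatste
    $run zfs rename zroot/usr@nu zroot/usr@laatste
    $run zfs rename fastbackup/usr@nu fastbackup/usr@laatste
}

backup echo   # dry run: print the six commands instead of running them
```

The dry-run parameter keeps the sketch harmless to run anywhere; on the real server you would call `backup ""` as root.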
After a bit of work:

[root@frost ~]# uname -a
FreeBSD frost.local 8.2-RELEASE FreeBSD 8.2-RELEASE #0: Thu Feb 17 02:41:51 UTC 2011 root@mason.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC amd64
[root@frost ~]# zpool status
  pool: zroot
 state: ONLINE
 scrub: none requested
config:

	NAME           STATE     READ WRITE CKSUM
	zroot          ONLINE       0     0     0
	  mirror       ONLINE       0     0     0
	    gpt/disk1  ONLINE       0     0     0
	    gpt/disk2  ONLINE       0     0     0

errors: No known data errors

[root@frost ~]# df -h /
Filesystem Size Used Avail Capacity Mounted on
zroot 1.4T 329M 1.4T 0% /

It turns out that converting from Debian to FreeBSD isn't that easy: in practice the two no longer share any advanced filesystem, so I had to resort to:

tar cf /dev/sdc1 /home/ /root/ /etc/ /Media/ /Backups/

which is not very elegant. Now I'm learning about the wonders of pkg_add and freebsd-update.
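The tar-to-raw-device trick works because tar needs no filesystem on the target at all. The same thing can be exercised safely with a plain file standing in for /dev/sdc1:

```shell
#!/bin/sh
# Demonstration that tar writes its archive straight to whatever "file"
# it is given; here a plain file plays the role of the raw /dev/sdc1
# from the post, so no shared filesystem between the two OSes is needed.
set -e
work=$(mktemp -d)
mkdir -p "$work/home"
echo "some data" > "$work/home/notes.txt"

# "Debian side": archive directly onto the (stand-in) device.
tar cf "$work/disk.img" -C "$work" home

# "FreeBSD side": read the archive back off the device.
mkdir "$work/restore"
tar xf "$work/disk.img" -C "$work/restore"
cat "$work/restore/home/notes.txt"   # prints "some data"
```

One caveat worth noting: FreeBSD names disks differently (something like /dev/da0s1 rather than /dev/sdc1), so the device path has to be adjusted on the restore side.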

First thing: how to compile a kernel only for my hardware...
Things I learned:
- a fuse can break
- it is most likely to do so just when people come to see if they want to buy your house
- a lamp has almost no resistance, so you can use one to check the wiring from the fuse box
- FreeBSD does not know about LVM or dm, so it is not easy to copy data across
- fuse-zfs is not good enough to mount the FreeBSD ZFS partitions
- the Intel BIOS on my PC will not boot from a disk when no partition is marked as active
- installing the FreeBSD MBR on a 2T disk fails; using ZFS on GPT works

Open questions:
- I can install packages with "pkg_add -r <foo>", but how do I check for updates and apply them?
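For what it's worth, a sketch of the usual answer on FreeBSD of that era; these particular tools are my assumption, since the post leaves the question open. The ports tree is refreshed with portsnap, pkg_version compares installed packages against it, and freebsd-update patches the base system:

```shell
#!/bin/sh
# Sketch of checking for updates on FreeBSD 8.x (assumed tools, the
# post leaves this question open). Pass "echo" for a dry run that
# only prints the commands; pass "" to really run them as root.
check_updates() {
    run=$1
    $run portsnap fetch update           # refresh the ports tree
    $run pkg_version -vl '<'             # list packages older than their port
    $run freebsd-update fetch install    # fetch and apply base-system patches
}

check_updates echo   # dry run: print the commands
```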
I'm experimenting with ZFS at home, for the moment on top of my md/LVM setup, and I ran out of disk space. Growing the LV is pretty easy:

frost:~# lvextend --size +110G /dev/new-vg/zfs-test
  Extending logical volume zfs-test to 120.00 GiB
  Logical volume zfs-test successfully resized
frost:~# zpool list
NAME      SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
zfs-pool  9.94G  9.78G   161M    98%  1.00x  ONLINE  -

Hmm, it did not notice the extra 110GB, so I did:

frost:~# zpool export zfs-pool
frost:~# zpool import zfs-pool
frost:~# zpool list
NAME      SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
zfs-pool   120G  9.78G   110G     8%  1.00x  ONLINE  -

so simply doing an import/export is enough.
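The export/import cycle works because it forces ZFS to re-read the device size. Newer ZFS versions can also grow a pool online; the autoexpand property and `zpool online -e` below are my assumption about those versions, not something tested here:

```shell
#!/bin/sh
# Growing a pool without an export/import, on ZFS versions that
# support the autoexpand property (an assumption, not tested in the
# post). Pass "echo" for a dry run; pass "" to really run as root.
grow_pool() {
    run=$1
    $run zpool set autoexpand=on zfs-pool
    # Re-probe the (already online) device and expand into the new space.
    $run zpool online -e zfs-pool /dev/new-vg/zfs-test
}

grow_pool echo   # dry run: print the commands
```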

I'm looking at ZFS to get a better idea of what btrfs will mean for us in the future.


Page generated Feb. 28th, 2017 09:57 am
Powered by Dreamwidth Studios