
Re: Hammer or ZFS based backup, encryption


From: Freddie Cash <fjwcash@xxxxxxxxx>
Date: Sun, 22 Feb 2009 22:09:33 -0800

On Sat, Feb 21, 2009 at 10:39 AM, Csaba Henk <csaba.henk@creo.hu> wrote:
> I need to setup a backup machine, and I intend to utilize today's
> snapshotty filesystems (which boils down to Dfly+Hammer or FBSD+ZFS --
> btrfs is not there yet, and I don't feel like delving into Solaris).
> Set up such an OS with such an fs, and back up by syncing to the
> snapshotty fs and creating a snapshot.
>
> I wonder about the following things:
>
> 1) Any idea how this approach scales compared to more conventional
> solutions, like rdiff-backup or dump(8)? I see the pros, but are there any
> cons? How effective is taking regular snapshots space-wise?
>
> 2) Is there any practical argument for choosing between Dfly+Hammer and
> FBSD+ZFS? (Feel free to give biased answers :) )
>
> 3) I'd like to encrypt stuff, either at device or fs level. For
> FreeBSD there is geli(8). I haven't found anything for DragonFly.
> Is there any way to get at it on DragonFly?

We do this at work, using FreeBSD 7.1 and ZFS, for backing up over 80
remote Linux and FreeBSD servers, and 1 Windows station.  We have two
servers, one that does the backups every night, and another that
mirrors the backups during the day.

The main backup server is:
  - Chenbro 5U rackmount case with 24 drive bays and a hot-swappable
    SATA backplane
  - Tyan h2000M motherboard
  - 2x Opteron 2200-series CPUs @ 2.6 GHz (dual-core)
  - 8 GB ECC DDR2-667 SDRAM
  - 3Ware 9650SE PCIe RAID controller
  - 3Ware 9550SXU PCI-X RAID controller
  - 24x 500 GB SATA HDs (Seagate and Maxtor)
  - 1350W 4-way hot-swappable PSU

All 24 drives are configured as "Single Disk" arrays, and show up as
24 separate SCSI devices in the OS.  This allows the RAID controller
to use the 256 MB of onboard RAM as another level of cache and allows
us to use the 3dm2 management console (as opposed to JBOD-mode where
it becomes just a dumb SATA controller).

The main backup server has 2x 2GB CompactFlash in IDE adapters that
house the base OS install.  The second backup server has 2x 2GB USB
sticks for the OS install.  All space on all 24 drives is used for the
ZFS pool.

The main backup server has a single 24-drive raidz2 vdev.  Not
optimal, but we didn't know any better back then.  :)  The second
server has 3x 8-drive raidz2 vdevs.  Not as much usable space, but
better performance and redundancy.
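
For reference, the second server's layout boils down to roughly the
following.  This is a minimal sketch: the pool name and the da* device
names are placeholders, not our exact ones.

  # 3x 8-drive raidz2 vdevs in one pool (names are illustrative)
  zpool create backup \
      raidz2 da0  da1  da2  da3  da4  da5  da6  da7  \
      raidz2 da8  da9  da10 da11 da12 da13 da14 da15 \
      raidz2 da16 da17 da18 da19 da20 da21 da22 da23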

/ and /usr are on the CF/USB.  Having /usr on there makes single-user
mode a lot nicer to use.  /usr/local, /home, /usr/ports,
/usr/ports/distfiles, /usr/src, /usr/obj, /var, /tmp, and /backups are
ZFS filesystems with various properties set (like compression on
/usr/src, /usr/ports, and /backups).
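
Creating those is just the usual zfs create / zfs set dance, roughly
like this (the pool name is made up for illustration, and only a
couple of the filesystems are shown):

  # illustrative only -- dataset names and properties are assumptions
  zfs create backup/usr-src
  zfs set mountpoint=/usr/src backup/usr-src
  zfs set compression=on backup/usr-src
  zfs create backup/backups
  zfs set mountpoint=/backups backup/backups
  zfs set compression=on backup/backups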

We have a custom rsync script that backs up the remote servers every
night and then creates a snapshot named after the date.  A second
custom script rsyncs from a snapshot on the main server to the backups
directory on the second server, and then creates a snapshot there.
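
The core of the nightly run is nothing fancier than an rsync into a
per-host directory followed by a dated snapshot, along these lines.
This is a rough sketch; the paths, pool name, host names, and rsync
flags are assumptions, not our exact script.

  #!/bin/sh
  # sketch of the nightly backup run
  TODAY=$(date +%Y-%m-%d)
  for HOST in server1 server2 server3; do
      rsync -aH --delete --numeric-ids \
          root@${HOST}:/ /backups/${HOST}/
  done
  # one snapshot per night, named after the date
  zfs snapshot backup/backups@${TODAY}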

After the initial rsync of a server, which can take several days since
it easily maxes out an ADSL link's upload bandwidth, the daily run
takes about 6 hours, most of which is spent waiting for rsync to
generate the file listing.

This setup works wonders, and has been used to re-image servers,
restore files from backups, and even reset the permissions and
ownerships on /home after a teacher did a "chown -R" on /home by
mistake.  Being able to cd into a snapshot directory and access the
files directly is a godsend.
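
On ZFS that just means going through the hidden .zfs directory at the
root of the filesystem, e.g. (the snapshot name and paths below are
placeholders):

  # pull a file straight out of an older snapshot
  cd /backups/.zfs/snapshot/2009-02-15
  cp -p somehost/home/jdoe/lost-essay.odt /tmp/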

We've been running this setup since August 2008.  Disk usage so far is
just over 6 TB.  Daily snapshots average <10 GB.  With ~10 TB of drive
space in each server, we won't run out of space for a while yet.  And
when we get under 1 TB of free space, we just start swapping drives
out for larger ones and the extra space is automatically added into
the pool.  Theoretically, we can put just under 50 TB of disk into
these systems.  :)
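
Growing the pool is just a matter of swapping one disk at a time and
letting the resilver finish before the next swap; once every disk in a
vdev is the larger size, the extra space becomes available.  Roughly
(the device and pool names are placeholders, and depending on the ZFS
version an export/import may be needed before the new space shows up):

  # after physically swapping in the larger drive at the same slot
  zpool replace backup da5
  zpool status backup    # wait for the resilver before the next swap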

For the curious, these boxes cost under $10,000 CDN.  We like to
mention that when the storage vendors call with their $50,000 US
"budget" storage systems with 5 TB of disk space.  :D  They tend to
not call back.

-- 
Freddie Cash
fjwcash@gmail.com


