DragonFly users List (threaded) for 2009-02

Re: OT - was Hammer or ZFS based backup, encryption


From: Freddie Cash <fjwcash@xxxxxxxxx>
Date: Sun, 22 Feb 2009 22:21:52 -0800

On Sat, Feb 21, 2009 at 7:17 PM, Jeremy Chadwick <jdc@parodius.com> wrote:
> On Sun, Feb 22, 2009 at 11:59:57AM +1100, Dmitri Nikulin wrote:
>> On Sun, Feb 22, 2009 at 10:34 AM, Bill Hacker <wbh@conducive.org> wrote:
>> > Hopefully more 'good stuff' will be ported out of Solaris before it hits the
>> > 'too costly vs the alternatives' wall and is orphaned.
>>
>> Btrfs has been merged into mainline Linux now, and although it's
>> pretty far behind ZFS in completeness at the moment, it represents a
>> far greater degree of flexibility and power. In a couple of years when
>> it's stable and user friendly, high-end storage solutions will move
>> back to Linux, after having given Sun a lot of contracts due
>> specifically to ZFS.
>
> The fact that btrfs offers grow/shrink capability puts it ahead of ZFS
> with regards to home users who desire a NAS.  I can't stress this point
> enough.  ZFS's lack of this capability limits its scope.  As it stands
> now, if you replace a disk with a larger one, you have to go through
> this extremely fun process to make use of the new space available:
>
> - Offload all of your data somewhere (read: not "zfs export"); rsync
>  is usually what people end up using -- if you have multiple ZFS
>  filesystems, this can take some time
> - zpool destroy
> - zpool create
> - zfs create
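
Spelled out with throwaway pool/device names (pool "tank", disks da0-da2), that rebuild dance is roughly:

  # copy everything off to scratch space first (not "zfs export")
  rsync -a /tank/data/ scratchbox:/backup/
  # blow away the pool and rebuild it with the new disk layout
  zpool destroy tank
  zpool create tank raidz da0 da1 da2
  zfs create tank/data
  # then copy it all back
  rsync -a scratchbox:/backup/ /tank/data/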

According to the ZFS Admin manual, doing an online "zpool replace" of
the drive with a larger one, then a zpool export and zpool import, is
all that's needed to make use of the extra space.

In theory, one can replace all the drives in the storage array one at
a time, waiting for each resilver to complete, then do a single
export/import at the end and come away with a massively larger pool.
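
For anyone following along, the per-disk step looks roughly like this
(pool name "tank" and device da0 are just placeholders):

  # swap in the larger disk at the same location, then tell ZFS about it
  zpool replace tank da0
  # wait for the resilver to finish before moving on to the next disk
  zpool status tank
  # once every disk in the vdev has been upsized, cycle the pool
  zpool export tank
  zpool import tank
  # the extra capacity should show up here
  zpool list tank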

> And if you add a new disk to the system, it's impossible to add that
> disk to the existing pool -- you can, of course, create an entirely
> new zpool which uses that disk, but that has nothing to do with the
> existing zpool.  So you get to do the above dance.

You can add vdevs to a pool at any time.  Data will be striped across
all vdevs in the pool.  What you can't do is extend a raidz vdev.  But
you can add more raidz vdevs to a pool.

If you create a pool with a 3-drive raidz vdev, you can later extend
the pool by adding another 3-drive raidz vdev.  Or by adding a
mirrored vdev.  Or by just adding a single drive (although then you
lose the redundancy of the entire pool).
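
In command terms, again with made-up device names, growing a pool that
started as a 3-drive raidz looks something like:

  # original pool: a single 3-drive raidz vdev
  zpool create tank raidz da0 da1 da2
  # later, stripe a second 3-drive raidz vdev into the same pool
  zpool add tank raidz da3 da4 da5
  # or add a mirrored pair instead
  zpool add tank mirror da6 da7
  # or a lone disk; zpool warns about the mismatched replication
  # level and makes you confirm with -f
  zpool add -f tank da8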

I've done this several times when playing around with ZFS on FreeBSD
7.1 on a test system with 24 drives.  Started with a 12-drive raidz2
vdev.  Then added a 6-drive raidz2.  Then another 6-drive raidz2.
Then played around with 3x 8-drive raidz2 vdevs.  And a bunch of other
setups, just to see what the limitations were.  The only limitation is
that you can't start with an X-drive raidz2 and later extend that
single raidz2 vdev out to Y drives, like you can with some hardware
RAID controllers.
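
That test progression was essentially (device names are placeholders):

  # first pass: one 12-drive raidz2 vdev
  zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11
  # grow the pool by striping in a 6-drive raidz2 vdev...
  zpool add tank raidz2 da12 da13 da14 da15 da16 da17
  # ...and then another
  zpool add tank raidz2 da18 da19 da20 da21 da22 da23
  # what there's no command for is widening the original 12-drive
  # raidz2 vdev itself, the way some hardware RAID controllers can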

> I'll also point out that ZFS on FreeBSD (at least 7.x) performs very
> differently than on Solaris 10.  We use Solaris 10 x86 + ZFS at my
> workplace, and the overall usability of the system during heavy disk I/O
> is much more refined (read: smooth) than on FreeBSD.  It's interesting
> to do something like "zpool iostat 1" on FreeBSD compared to Solaris 10;
> FreeBSD will show massive write bursts (e.g. 0MB, 0MB, 0MB, 70MB, 0MB,
> 0MB, 0MB, 67MB, etc.), while Solaris behaves more appropriately (50MB,
> 60MB, 70MB, 40MB, etc.).  "zpool scrub" is a great way to test this.

Hrm, we haven't run into that, but we're mostly limited by network
speeds in our setup.  "zpool iostat" shows a fairly constant 2 MB/s
read or write to each of the 24 drives in the servers.  But that's all
rsync usage, and limited by the ADSL links we use.

-- 
Freddie Cash
fjwcash@gmail.com


