
Re: hammer upgrade clarification


From: Siju George <sgeorge.ml@xxxxxxxxx>
Date: Wed, 3 Feb 2010 10:52:25 +0530

Thank you so much, Matt, for the detailed explanation :-)


On Wed, Feb 3, 2010 at 12:12 AM, Matthew Dillon
<dillon@apollo.backplane.com> wrote:
>    Ok, a few things.  First, note that the HAMMER fs in the DragonFly
>    2.4.x RELEASE only goes up to version 2.  The HAMMER fs in the
>    development branch goes up to version 4.  So if you upgrade to
>    version 4 you have to use a development kernel and you cannot go
>    back to a 2.4.x release kernel.
>

Since I am running

 2.5.1-DEVELOPMENT DragonFly v2.5.1.672.gf81ef-DEVELOPMENT

I went directly to hammer version 4.
The upgrade took only a few seconds. Is that normal? The file system
is 454G, with 198G used. That fast?
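
For the record, I believe 'hammer version' reports a filesystem's
current version along with the range the running kernel supports, so
it is a way to double-check before (and after) an upgrade:

dfly-bkpsrv# hammer version /Backup1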

Then after the upgrade I got this message:

===============================
NOTE!  Please run 'hammer cleanup' to convert the
<pfs>/snapshots directories to the new meta-data
format. Once converted configuration data will
no longer resides in <pfs>/snapshots and you can
even rm -rf it entirely if you want.
===============================

Actually, after 'hammer cleanup' the <pfs>/snapshots directory is
completely missing, so I guess there is no need to 'rm -rf' anything.
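
If I understand the new scheme correctly, snapshots now live in
filesystem meta-data rather than in a directory, so I assume they can
still be listed with something like:

dfly-bkpsrv# hammer snapls /Backup1/VersionControl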

Then I got this message:

===============================================================
cleanup /Backup1/VersionControl -(migrating)  (16 snapshots)index 16
 (16 snapshots)index 16
 (4 snapshots)index 4
 HAMMER UPGRADE: Moving snapshots
        Moving snapshots from /Backup1/VersionControl/snapshots to /var/hammer/Backup1/VersionControl
 handle PFS #2 using /var/hammer/Backup1/VersionControl
           snapshots - skip
               prune - skip
           rebalance - skip
             reblock - skip
              recopy - skip
================================================================

What does

 (16 snapshots)index 16
 (4 snapshots)index 4

mean? I have more than 16 snapshots in the above PFS.

Also, my /var partition is UFS1, so is there any problem with storing
the PFS snapshots there?

Now the part I find most interesting is that 'hammer cleanup' did not
mention anything about the slave PFSs :-(

I have 3 slave PFSs, none of which was mentioned during 'hammer cleanup'.

One of the slave PFSs is:

dfly-bkpsrv# hammer pfs-status /Backup2/VersionControl/
/Backup2/VersionControl/        PFS #2 {
    sync-beg-tid=0x0000000000000001
    sync-end-tid=0x00000001782a2f80
    shared-uuid=7c29ba54-abf8-11de-ab74-011617202aa6
    unique-uuid=fd23c29f-abf8-11de-ab74-011617202aa6
    slave
    label=""
    prune-min=00:00:00
    operating as a SLAVE
    snapshots directory defaults to /var/hammer/<pfs>
}

Though it says 'snapshots directory defaults to /var/hammer/<pfs>',
there is nothing in that directory:

dfly-bkpsrv# cd /var/hammer/Backup2/
dfly-bkpsrv# ls
.prune.period           .reblock.period         .snapshots.period
.rebalance.period       .recopy.period

The snapshots are still found in:

dfly-bkpsrv# cd /Backup2/VersionControl/snapshots/
dfly-bkpsrv# pwd
/Backup2/pfs/@@0x00000001782a2f80:00002/snapshots
dfly-bkpsrv# ls
.prune.period           snap-20091218-0316      snap-20100112-0404
.rebalance.period       snap-20091219-0316      snap-20100115-0318
.reblock.period         snap-20091222-0316      snap-20100116-0316
.recopy.period          snap-20091223-0317      snap-20100119-0317
.snapshots.period       snap-20091224-0318      snap-20100120-0316
config                  snap-20091229-0317      snap-20100121-0317
snap-20091208-0314      snap-20091230-0317      snap-20100122-0318
snap-20091209-0315      snap-20091231-0317      snap-20100123-0318
snap-20091210-0315      snap-20100101-0318      snap-20100126-0317
snap-20091211-0315      snap-20100105-0318      snap-20100128-0318
snap-20091212-0425      snap-20100106-0318      snap-20100129-0317
snap-20091215-0316      snap-20100107-0318      snap-20100130-0318
snap-20091216-0317      snap-20100108-0319      snap-20100202-0318
snap-20091217-0316      snap-20100109-0320      snap-20100203-0318
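
The old 'config' file is still here too. If the new scheme keeps the
configuration in meta-data, I assume it could be checked with
something like:

dfly-bkpsrv# hammer config /Backup2/VersionControl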

The same is true for the PFS:

dfly-bkpsrv# hammer pfs-status /Backup2/Data/
/Backup2/Data/  PFS #1 {
    sync-beg-tid=0x0000000000000001
    sync-end-tid=0x00000001782a2f80
    shared-uuid=94063778-6ac7-11de-a994-011617202aa6
    unique-uuid=b015e877-6ac7-11de-a994-011617202aa6
    slave
    label=""
    prune-min=00:00:00
    operating as a SLAVE
    snapshots directory defaults to /var/hammer/<pfs>
}
dfly-bkpsrv# ls -l /var/hammer/Backup2
total 10
-rw-r--r--  1 root  wheel  11 Feb  3 03:01 .prune.period
-rw-r--r--  1 root  wheel  11 Feb  3 03:01 .rebalance.period
-rw-r--r--  1 root  wheel  11 Feb  3 03:01 .reblock.period
-rw-r--r--  1 root  wheel  11 Jan  5 03:01 .recopy.period
-rw-r--r--  1 root  wheel  11 Jul  7  2009 .snapshots.period
dfly-bkpsrv# ls -l /Backup2/Data/snapshots/
total 0
-rw-r--r--  1 root  wheel  11 Feb  3 03:03 .prune.period
-rw-r--r--  1 root  wheel  11 Feb  3 03:18 .rebalance.period
-rw-r--r--  1 root  wheel  11 Feb  3 03:16 .reblock.period
-rw-r--r--  1 root  wheel  11 Jan 12 04:02 .recopy.period
-rw-r--r--  1 root  wheel  11 Jul 14  2009 .snapshots.period
-rw-r--r--  1 root  wheel  85 Jul 11  2009 config
dfly-bkpsrv#

For another PFS:

dfly-bkpsrv# hammer pfs-status /Backup2/test/
/Backup2/test/  PFS #3 {
    sync-beg-tid=0x0000000000000001
    sync-end-tid=0x000000013f644d60
    shared-uuid=9043570e-b3d9-11de-9bef-011617202aa6
    unique-uuid=97d77f53-b3da-11de-9bef-011617202aa6
    slave
    label=""
    prune-min=00:00:00
    operating as a SLAVE
    snapshots directory defaults to /var/hammer/<pfs>
}

Not only is the snapshots directory /var/hammer/<pfs> missing, the
snapshots directory <pfs>/snapshots itself is missing :-(

dfly-bkpsrv# ls -l /var/hammer/Backup2
total 10
-rw-r--r--  1 root  wheel  11 Feb  3 03:01 .prune.period
-rw-r--r--  1 root  wheel  11 Feb  3 03:01 .rebalance.period
-rw-r--r--  1 root  wheel  11 Feb  3 03:01 .reblock.period
-rw-r--r--  1 root  wheel  11 Jan  5 03:01 .recopy.period
-rw-r--r--  1 root  wheel  11 Jul  7  2009 .snapshots.period

dfly-bkpsrv# ls -l /Backup2/test/
total 0
-rw-r--r--  1 root  wheel  0 Oct  8 12:48 test-file

Did I lose all the snapshots? Even if snapshots are disabled, I would
guess 'hammer cleanup' does not delete the configuration files in the
snapshots directory?
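
If the snapshots had been migrated into meta-data, I suppose they
would still show up with something like:

dfly-bkpsrv# hammer snapls /Backup2/test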

The slaves all have one thing in common: they are all on a different
physical disk. Is that the reason why 'hammer cleanup' did nothing to
them? But 'hammer pfs-status' does say that the snapshot directories
are in /var/hammer/<pfs>?
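
According to the man page, 'hammer cleanup' also accepts explicit
filesystem arguments, so perhaps pointing it at the slaves directly
would pick them up:

dfly-bkpsrv# hammer cleanup /Backup2/Data /Backup2/VersionControl /Backup2/test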

Also, is it a good idea to store all the snapshots of PFSs spread
across many different disks on a single disk? If that disk is lost,
then the snapshots of all the PFSs are lost with it, aren't they?
Isn't storing the snapshots inside the PFSs a better idea?
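
(I believe the snapshots location can also be overridden per PFS with
something like 'hammer pfs-update <pfs> snapshots=<dir>', which would
keep each PFS's snapshot directory on its own disk, if I am reading
the man page right.)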

Finally, my 'hammer cleanup' output after the upgrade:

===================================================================

dfly-bkpsrv# hammer version-upgrade /Backup2 4
hammer version-upgrade: succeeded
NOTE!  Please run 'hammer cleanup' to convert the
<pfs>/snapshots directories to the new meta-data
format.  Once converted configuration data will
no longer resides in <pfs>/snapshots and you can
even rm -rf it entirely if you want.
dfly-bkpsrv# hammer version-upgrade /Backup1 4
hammer version-upgrade: succeeded
NOTE!  Please run 'hammer cleanup' to convert the
<pfs>/snapshots directories to the new meta-data
format.  Once converted configuration data will
no longer resides in <pfs>/snapshots and you can
even rm -rf it entirely if you want.

dfly-bkpsrv# hammer cleanup
cleanup /Backup1             -(migrating)  HAMMER UPGRADE: Moving snapshots
        Moving snapshots from /Backup1/snapshots to /var/hammer/Backup1
 handle PFS #0 using /var/hammer/Backup1
           snapshots - disabled
               prune - skip
             reblock - skip
              recopy - skip
           rebalance - skip
cleanup /Backup2             -(migrating)  HAMMER UPGRADE: Moving snapshots
        Moving snapshots from /Backup2/snapshots to /var/hammer/Backup2
 handle PFS #0 using /var/hammer/Backup2
           snapshots - disabled
               prune - skip
             reblock - skip
              recopy - skip
           rebalance - skip
cleanup /Backup1/Data        -(migrating)  HAMMER UPGRADE: Moving snapshots
        Moving snapshots from /Backup1/Data/snapshots to /var/hammer/Backup1/Data
 handle PFS #1 using /var/hammer/Backup1/Data
           snapshots - disabled
               prune - skip
             reblock - skip
              recopy - skip
           rebalance - skip
cleanup /Backup1/VersionControl -(migrating)  (16 snapshots)index 16
 (16 snapshots)index 16
 (4 snapshots)index 4
 HAMMER UPGRADE: Moving snapshots
        Moving snapshots from /Backup1/VersionControl/snapshots to /var/hammer/Backup1/VersionControl
 handle PFS #2 using /var/hammer/Backup1/VersionControl
           snapshots - skip
               prune - skip
           rebalance - skip
             reblock - skip
              recopy - skip
cleanup /Backup1/test        -(migrating)  (16 snapshots)index 16
 (16 snapshots)index 16
 (4 snapshots)index 4
 HAMMER UPGRADE: Moving snapshots
        Moving snapshots from /Backup1/test/snapshots to /var/hammer/Backup1/test
 handle PFS #3 using /var/hammer/Backup1/test
           snapshots - skip
               prune - skip
           rebalance - skip
             reblock - skip
              recopy - skip
dfly-bkpsrv# uname -a
DragonFly dfly-bkpsrv.hifxchn2.local 2.5.1-DEVELOPMENT DragonFly
v2.5.1.672.gf81ef-DEVELOPMENT #17: Tue Feb  2 13:37:41 IST 2010
root@dfly-bkpsrv.hifxchn2.local:/usr/obj/usr/src/sys/GENERIC  i386

dfly-bkpsrv# df -h
Filesystem                Size   Used  Avail Capacity  Mounted on
/dev/ad4s1a               1.0G   207M   720M    22%    /
devfs                     1.0K   1.0K     0B   100%    /dev
/dev/ad4s1d               1.0G    42K   927M     0%    /home
/dev/ad4s1e               1.0G    42K   927M     0%    /tmp
/dev/ad4s1f               4.9G   3.2G   1.4G    70%    /usr
/dev/ad4s1g               1.0G   115M   813M    12%    /var
procfs                    4.0K   4.0K     0B   100%    /proc
Backup1                   454G   198G   256G    44%    /Backup1
Backup2                   454G   226G   228G    50%    /Backup2
/Backup1/pfs/@@-1:00001   454G   198G   256G    44%    /Backup1/Data
/Backup1/pfs/@@-1:00002   454G   198G   256G    44%    /Backup1/VersionControl
/Backup1/pfs/@@-1:00003   454G   198G   256G    44%    /Backup1/test
/dev/ad6s1a               1.0G   206M   722M    22%    /mnt/2ndDisk
/dev/ad6s1d               1.0G    20K   927M     0%    /mnt/2ndDisk/home
/dev/ad6s1e               1.0G    24K   927M     0%    /mnt/2ndDisk/tmp
/dev/ad6s1f               4.9G   2.9G   1.7G    63%    /mnt/2ndDisk/usr
/dev/ad6s1g               1.0G   9.5M   918M     1%    /mnt/2ndDisk/var
dfly-bkpsrv# cat /etc/fstab
# Device                Mountpoint      FStype  Options         Dump    Pass#
/dev/ad4s1a             /               ufs     rw              1       1
/dev/ad4s1d             /home           ufs     rw              2       2
/dev/ad4s1e             /tmp            ufs     rw              2       2
/dev/ad4s1f             /usr            ufs     rw              2       2
/dev/ad4s1g             /var            ufs     rw              2       2
/dev/ad4s1b             none            swap    sw              0       0
proc                    /proc           procfs  rw              0       0
#
#Hammer Partitions - ad4s1h has master pfs, ad6s1h has slave pfs################
/dev/ad4s1h             /Backup1        hammer  rw              2       2
/dev/ad6s1h             /Backup2        hammer  rw              2       2
#pfs null_mount
/Backup1/pfs/Data      /Backup1/Data    null    rw              0       0
/Backup1/pfs/VersionControl /Backup1/VersionControl null rw     0       0
/Backup1/pfs/test      /Backup1/test    null    rw              0       0
##############################################################################
#2nd Disk partitions mounted for chroot#######################################
/dev/ad6s1a             /mnt/2ndDisk    ufs     rw              1       1
/dev/ad6s1d        /mnt/2ndDisk/home    ufs     rw              2       2
/dev/ad6s1e        /mnt/2ndDisk/tmp     ufs     rw              2       2
/dev/ad6s1f        /mnt/2ndDisk/usr     ufs     rw              2       2
/dev/ad6s1g        /mnt/2ndDisk/var     ufs     rw              2       2
/dev/ad6s1b             none            swap    sw              0       0
dfly-bkpsrv#

dfly-bkpsrv# ls -l /Backup2
total 0
lrwxr-xr-x  1 root  wheel  17 Jul  7  2009 Data -> /Backup2/pfs/Data
lrwxr-xr-x  1 root  wheel  27 Sep 28 12:08 VersionControl -> /Backup2/pfs/VersionControl
drwxr-xr-x  1 root  wheel   0 Oct  8 12:47 pfs
lrwxr-xr-x  1 root  wheel  17 Oct  8 12:48 test -> /Backup2/pfs/test
dfly-bkpsrv# ls -l /Backup2/pfs/
total 0
lrwxr-xr-x  1 root  wheel  10 Jul  7  2009 Data -> @@0x00000001782a2f80:00001
lrwxr-xr-x  1 root  wheel  10 Sep 28 12:04 VersionControl -> @@0x00000001782a2f80:00002
lrwxr-xr-x  1 root  wheel  10 Oct  8 12:47 test -> @@0x000000013f644d60:00003
dfly-bkpsrv#

Mirroring is set up through:

dfly-bkpsrv# cat /etc/rc.local
#Start pfs mirroring
(cd /root/adm; /usr/bin/lockf -k -t 0 .lockfile ./hms) &

#Start BackupPC
su backuppc -c '/usr/pkg/bin/BackupPC/bin/BackupPC -d' &

#Start OpenNTPD
/usr/pkg/sbin/ntpd -s &
dfly-bkpsrv# cat /root/adm/hms
hammer mirror-stream /Backup1/Data /Backup2/Data &
hammer mirror-stream /Backup1/VersionControl /Backup2/VersionControl
dfly-bkpsrv#
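
For what it is worth, the second mirror-stream stays in the
foreground, which is what keeps the lockf lock in /etc/rc.local held.
A variant that also streams the test PFS and waits on all of them
might look like this (untested sketch):

#!/bin/sh
# Stream all three PFSs in the background, then wait so the
# lockf lock stays held for as long as the streams run.
hammer mirror-stream /Backup1/Data /Backup2/Data &
hammer mirror-stream /Backup1/VersionControl /Backup2/VersionControl &
hammer mirror-stream /Backup1/test /Backup2/test &
wait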

Thank you so much

--Siju


