DragonFly BSD
DragonFly users List (threaded) for 2010-04

Re: Hammer ghost slave PFS


From: Antonio Huete Jimenez <ahuete.devel@gmail.com>
Date: Sun, 11 Apr 2010 13:12:32 +0200

In fact, you can access any of your slave PFSs by using TID -1:

# hammer pfs-status @@-1:00011
@@0x0000000000000000:00011      PFS #11 {
    sync-beg-tid=0x0000000000000001
    sync-end-tid=0x0000000127fbb1c0
    shared-uuid=9970d0f9-0f10-11df-acc1-9bd6198bd0ab
    unique-uuid=67b036ac-455a-11df-8910-73e089715096
    slave
    label=""
    prune-min=00:00:00
    operating as a SLAVE
    snapshots directory defaults to /var/hammer/<pfs>
}
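
For example, a small /bin/sh loop (just a sketch: it assumes your current
working directory is somewhere on the HAMMER filesystem, and it only probes
the first 16 PFS IDs, though HAMMER allows many more) could enumerate all
of them:

i=0
while [ $i -lt 16 ]; do
        # @@-1:<id> resolves to the latest TID, so it also works for slaves
        hammer pfs-status @@-1:$(printf '%05d' $i) 2>/dev/null
        i=$((i + 1))
done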

So you wouldn't need the patch.

Cheers,
Antonio Huete


2010/4/11 Antonio Huete Jimenez <ahuete.devel@gmail.com>:
> Steve,
>
> Can you please try this patch?
>
> diff --git a/sbin/hammer/cmd_info.c b/sbin/hammer/cmd_info.c
> index 946d3c8..12e5246 100644
> --- a/sbin/hammer/cmd_info.c
> +++ b/sbin/hammer/cmd_info.c
> @@ -160,7 +160,7 @@ show_info(char *path)
>
>        /* Pseudo-filesystem information */
>        fprintf(stdout, "PFS information\n");
> -       fprintf(stdout, "\tPFS ID  Mode    Snaps  Mounted on\n");
> +       fprintf(stdout, "\tPFS ID  Mode    Snaps  End TID             Mounted on\n");
>
>        while(pfs_id < HAMMER_MAX_PFS) {
>                bzero(&pfs, sizeof(pfs));
> @@ -181,12 +181,15 @@ show_info(char *path)
>                        sc = count_snapshots(fd, info.version, pfs_od.snapshots,
>                            mountedon);
>
> -                       fprintf(stdout, "\t%6d  %-6s %6d  ",
> -                           pfs_id, (ismaster ? "MASTER" : "SLAVE"), sc);
> +                       fprintf(stdout, "\t%6d  %-6s %6d  0x%016jx  ",
> +                           pfs_id, (ismaster ? "MASTER" : "SLAVE"), sc,
> +                           pfs.ondisk->sync_end_tid);
> +
>                        if (mountedon)
>                                fprintf(stdout, "%s", mountedon);
>                        else
>                                fprintf(stdout, "not mounted");
> +
>                        fprintf(stdout, "\n");
>                }
>                pfs_id++;
>
>
> --
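>
> With the patch applied you can rebuild and reinstall the hammer utility,
> e.g. (a sketch assuming a standard /usr/src source checkout; your paths
> may differ):
>
> # cd /usr/src/sbin/hammer
> # make && make install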
>
> This will show the End TID of all PFSs, including your slaves:
>             9  SLAVE      50  0x000000010a543830  not mounted
>            10  SLAVE      50  0x0000000000000001  not mounted
>
> So you can see the status of slave PFS #9 with this:
>
> # hammer pfs-status /@@0x000000010a543830:00009
> /@@0x000000010a543830:00009     PFS #9 {
>    sync-beg-tid=0x0000000000000001
>    sync-end-tid=0x000000010a543830
>    shared-uuid=9970d0f9-0f10-11df-acc1-9bd6198bd0ab
>    unique-uuid=91d11477-1009-11df-80a0-9bd6198bd0ab
>    slave
>    label=""
>    prune-min=00:00:00
>    operating as a SLAVE
>    snapshots directory defaults to /var/hammer/<pfs>
> }
>
> If you want to wipe it out, just do this:
>
> # ln -s @@0x000000010a543830:00009 /todestroy
> # hammer pfs-destroy /todestroy
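>
> To double-check that the ghost is really gone, you could then re-run
> hammer info on the filesystem (with the patch above applied) and confirm
> PFS #9 no longer shows up, e.g.:
>
> # hammer info /data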
>
> There are probably better ways, but I hope this helps you :-)
>
> Cheers,
> Antonio Huete
>
> 2010/4/11 Steve O'Hara-Smith <steve@sohara.org>:
>>        Hi Antonio,
>>
>>        There is no /pfs dir - the two slaves were created by hammer
>> mirror-copy and live in <fs>/backup/, which looks like this:
>>
>> ls -l /data/backup/
>> total 0
>> lrwxr-xr-x  1 root  wheel  10 Dec 12 06:45 df1.marelmo.com-home -> @@0x0000000878898420:00002
>> lrwxr-xr-x  1 root  wheel  10 Feb  4 13:12 steve.marelmo.com-home -> @@0x000000010af0e2e0:00004
>>
>>
>> On Sun, 11 Apr 2010 10:40:12 +0200
>> Antonio Huete Jimenez <ahuete.devel@gmail.com> wrote:
>>
>>> Hi Steve,
>>>
>>> What is listed in the /pfs dir?
>>>
>>> Cheers,
>>> Antonio Huete
>>>
>>> 2010/4/11 Steve O'Hara-Smith <steve@sohara.org>:
>>> >        Hi,
>>> >
>>> >        I have a ~1TB hammer filesystem with two slave PFSs on it that
>>> > seems to be using more space than I can account for. I was looking into
>>> > it when I noticed this in the hammer info output:
>>> >
>>> >
>>> >        PFS ID  Mode    Snaps  Mounted on
>>> >             0  MASTER      3  /data
>>> >             1  SLAVE       3  not mounted
>>> >             2  SLAVE       3  not mounted
>>> >             4  SLAVE       3  not mounted
>>> >
>>> >        Three slave PFSs listed - but there are only two (unless I'm
>>> > going crazy). How can I find out which ones are the two I know about,
>>> > and how can I get rid of the extra one?
>>> >
>>> >        Would I be best to simply rebuild the filesystem and start
>>> > again? This filesystem started life as version 1 and has been
>>> > upgraded to version 4 over time, so it has seen quite a few versions of
>>> > DFly since it was first built.
>>> >
>>> > --
>>> > Steve O'Hara-Smith                          |   Directable Mirror Arrays
>>> > C:>WIN                                      | A better way to focus the sun
>>> > The computer obeys and wins.                |    licences available see
>>> > You lose and Bill collects.                 |    http://www.sohara.org/
>>> >
>>
>>
>> --
>> Steve O'Hara-Smith                          |   Directable Mirror Arrays
>> C:>WIN                                      | A better way to focus the sun
>> The computer obeys and wins.                |    licences available see
>> You lose and Bill collects.                 |    http://www.sohara.org/
>>
>