DragonFly kernel List (threaded) for 2009-03

HAMMER status WEF March 2009 - outsider viewpoint


From: Bill Hacker <wbh@xxxxxxxxxxxxx>
Date: Thu, 05 Mar 2009 16:11:12 +0800

'Bestoren' again, and cross-posting to the too-seldom-used dragonfly.hammer newsgroup set up for porting...

Outsider viewpoint WEF March 5 2009, hopefully rendered 'stale' soonest:

- Desired:

-- ability to roll out hammerfs in pilot, if not full-production, use for its excellent and inherently fast snapshot/rollback features.
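For the record, the workflow meant here is roughly the following (paths and the snapshot name are made-up examples; the '@@transaction-id' view is hammerfs's own):

    # take a named snapshot of /home - the result is a softlink into a
    # frozen '@@<transaction-id>' view of the filesystem:
    hammer snapshot /home /home/snaps/before-upgrade
    ls -l /home/snaps/before-upgrade
    # per-file history is available even without a snapshot:
    hammer history /home/someclient/httpd.conf

Cheap to create, cheap to keep around - which is exactly why it is wanted in production.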


- Current Barriers:


-- limited/no ability to enforce quotas / prevent overflow damage.

-- little/no ability to export sub-dirs as coherent 'chunks', at least with hammerfs-specific tools. 'rsync' and 'cpdup' still work as always, or nearly so, as ...


-- the above is complicated by limited 'awareness' of hammerfs specifics from the viewpoint of some of the legacy tools:

E.g.:

- 'cpdup' can manage softlinks 'as directed', but 'scp -r' cannot, and may expand several snapshots onto a target (a made-up illustration follows this list).

- 'ls' does not always act as expected, nor return useful info, nor the *same* info depending on whether PFS mounts are involved. Likewise, to a lesser extent, 'du'.

Other 'traditional' tools are similarly challenged, and may benefit from, OR be even more confused by, the use of PFS mounts.

Needed: more hammerfs-specific alternatives and/or hammerfs-awareness integration. Neither is expected overnight.
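To make the cpdup / scp -r point above concrete, a made-up example (the snapshot softlinks are of the sort created by 'hammer snapshot'; paths and the host are placeholders):

    # /home/snaps contains softlinks into '@@<transaction-id>' views:
    #   snap-20090301 -> /home/@@0x...
    #   snap-20090302 -> /home/@@0x...
    #
    # cpdup copies the softlinks as softlinks - the target stays small:
    cpdup /home /backup/home
    # scp -r follows them - the target gets a full expansion of every
    # snapshot it walks into:
    scp -r /home backuphost:/backup/home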


Workarounds planned for the moment:


- maintain the 'system' on traditional UFS, where traditional tools act as they always have. Backing that up is a road well-traveled; versioning can live elsewhere.

- Slice separately-mounted large media for client storage into smaller-than-optimal sizes for hammerfs. This is to reduce the risk of one client overflowing and damaging the storage area of another. Sub-optimal sizing ain't the same as 'useless', but an overfilled disk IS, and has been known to be able to panic the system under at least a few edge-case scenarios.

How so: a 1 TB or 1.5 TB drive sliced into 50-500 GB portions, typically 100-250 GB, for the working storage.
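Roughly what that looks like in practice, with made-up device, label, and mount-point names (partition letters are examples only):

    # a large drive labeled (disklabel64) into several ~200GB partitions,
    # each getting its own hammerfs, so one client filling its slice
    # cannot eat another client's space:
    newfs_hammer -L CLIENT_A /dev/ad1s1d
    newfs_hammer -L CLIENT_B /dev/ad1s1e
    mount_hammer /dev/ad1s1d /storage/client_a
    mount_hammer /dev/ad1s1e /storage/client_b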

- Use of a more optimal entire-device hammerfs as the target of hammer mirror-copy / hammer mirror-stream backup.

The premise is that the limits on the source will ensure that the target has no overflow issues. Or 'fewer', anyway.
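A sketch of that arrangement as the tools stand today (paths are placeholders; the slave's shared-uuid has to be copied from the master, e.g. via 'hammer pfs-status'):

    # source box: the client area lives in a PFS master
    hammer pfs-master /storage/client_a/pfs/data
    # backup box, whole-device hammerfs: a matching slave
    hammer pfs-slave /backup/pfs/client_a shared-uuid=<uuid-from-master>
    # one-shot copy, or keep it continuously synced:
    hammer mirror-copy   /storage/client_a/pfs/data /backup/pfs/client_a
    hammer mirror-stream /storage/client_a/pfs/data /backup/pfs/client_a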


Seen to be needed 'Real Soon Now':


- 'pluggable' back-end transport choices for hammer mirror-copy / hammer mirror-stream for clustering/mirroring (the ssh-based status quo is sketched after this list), to wit:

-- 'raw' Ethernet GigE and 10GigE for local - even roll-cable - use within a rack. The local SAN isolation/trust model, with no need of TCP/IP or ssh overheads. More akin to the old dual-controller SCSI chains. Significant throughput improvement should be possible without errors becoming a problem.

-- InfiniBand 'verbs' drivers (see GlusterFS).

-- iSCSI, ATA over Ethernet, Fibre Channel - whatever else can be adapted or implemented 'soonest' and most easily. SSA and FC-AL can probably be left for dead...
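For reference, the transport that exists today is ssh, along these lines (host and paths are made up); the wish above is simply for the same stream to be able to ride cheaper pipes:

    # remote mirroring as it stands - the slave side is reached over ssh:
    hammer mirror-stream /storage/client_a/pfs/data \
        backuphost:/backup/pfs/client_a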

Side Note:

The 'fuse' approach to (the involved fs of choice) doesn't look likely to ever be much more than a parlour trick handy for maintenance. Bonnie++ or blogbench too easily stop it in its tracks versus even a basic legacy NFS mount. All the more so if either or both of source and target happen to be running anything even the least bit hungry in userspace (Xorg and friends, even if idle).
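For anyone who wants to reproduce that comparison, the sort of run meant is the following, with a fuse-backed mount on one side and a plain NFS mount on the other (both mount points are placeholders):

    # run as an unprivileged user; add '-u <user>' if started as root
    bonnie++ -d /mnt/fuse -s 1024
    bonnie++ -d /mnt/nfs  -s 1024
    blogbench -d /mnt/fuse
    blogbench -d /mnt/nfs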

No need to take that detour.

Hope and trust some food for thought comes out of this.

First we walk. THEN we run....

Regards,

Bill Hacker


