DragonFly kernel List (threaded) for 2007-02

Re: Plans for 1.8+ (2.0?)


From: Bill Hacker <wbh@xxxxxxxxxxxxx>
Date: Thu, 01 Feb 2007 20:48:38 +0800

Tobias Schacht wrote:
On 2/1/07, Matthew Dillon <dillon@apollo.backplane.com> wrote:

No, it's a lot more complex than that. There are three basic issues:

* Redundancy in a heavily distributed environment

* Transactional consistency

* Cache coherency and conflict management

Hm, I wonder how plan9 solved these issues? AFAIK they have at least a snapshot-capable fs (long before ZFS), and since their scope is also a distributed environment, it may be a good idea to take a look and maybe borrow some ideas?

But I'm really no expert here, so anybody with a clue is invited to
comment on that. ;)

The Plan 9 storage (two distinct types) was 'widely sharable', not clustered, per se.


One might make the case that it had more in common with NFS than with clustering.

BUT.. it had a mechanism to ensure 'current status' more akin to the table-, row-, and record-locking schemes of an RDBMS (which ZFS has a kinship with).

Being simple and hash-based, these were an order of magnitude or two simpler - hence faster - than ZFS could be on the same hardware.
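To make the 'simple hash-based' idea concrete, here is a minimal C sketch of a content-addressed block store in the spirit of Plan 9's Venti, where a block's address *is* the hash of its contents: identical writes agree on an address with no locking, and reads are self-verifying. FNV-1a stands in for the real hash (Venti used SHA-1), and every name here is illustrative rather than actual Plan 9 code:

/*
 * Minimal sketch of a hash-addressed ("content-addressed") block
 * store: the hash of a block's data doubles as its address, so two
 * writers storing identical data get the same address without any
 * coordination, and a reader can verify a block by re-hashing it.
 * FNV-1a is a stand-in for a real cryptographic hash; all names
 * are illustrative, not Plan 9 interfaces.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NBUCKET 1024
#define BLKSIZE 64

struct block {
	uint64_t addr;			/* hash of data; doubles as the address */
	uint8_t  data[BLKSIZE];
	struct block *next;		/* bucket chain */
};

static struct block store[4096];
static struct block *bucket[NBUCKET];
static int nblocks;

static uint64_t
fnv1a(const uint8_t *p, size_t n)
{
	uint64_t h = 14695981039346656037ULL;

	while (n-- > 0)
		h = (h ^ *p++) * 1099511628211ULL;
	return h;
}

/* Store a block; identical contents always yield the same address. */
static uint64_t
put(const uint8_t *data)
{
	uint64_t h = fnv1a(data, BLKSIZE);
	struct block *b;

	for (b = bucket[h % NBUCKET]; b != NULL; b = b->next)
		if (b->addr == h)
			return h;	/* already stored: dedup for free */
	b = &store[nblocks++];
	b->addr = h;
	memcpy(b->data, data, BLKSIZE);
	b->next = bucket[h % NBUCKET];
	bucket[h % NBUCKET] = b;
	return h;
}

/* Fetch by address; the address itself is the integrity check. */
static const uint8_t *
get(uint64_t addr)
{
	struct block *b;

	for (b = bucket[addr % NBUCKET]; b != NULL; b = b->next)
		if (b->addr == addr)
			return b->data;
	return NULL;
}

int
main(void)
{
	uint8_t blk[BLKSIZE] = "hello, venti";
	uint64_t a = put(blk);

	printf("stored at %016llx, readback ok: %d\n",
	    (unsigned long long)a, get(a) != NULL);
	return 0;
}

Note there is no lock anywhere: because blocks are immutable and named by their contents, the 'current status' question reduces to comparing hashes.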

The constant 'snapshotting', OTOH, could have placed major strains on the paltry storage of the day (for anyone with less funding than AT&T, anyway).

That last part has changed.

With the capacity and cost of current HDDs, it is probably now faster and cheaper to 'abandon in place' a good deal of stale data than to even bother going back to look at it at all - let alone clean it up, make decisions as to what to archive, etc.
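A sketch of why 'abandon in place' is nearly free on a write-once, hash-addressed store like the one above: a snapshot is nothing but a saved root address, so old dumps cost nothing to keep and nothing ever needs cleaning up. Again, none of these names are real Plan 9 code:

/*
 * "Abandon in place" on top of write-once, hash-addressed storage:
 * a snapshot is just a saved root address.  Blocks are never
 * overwritten or reclaimed, so every old root stays valid forever,
 * and stale data is simply never visited again.
 */
#include <stdint.h>

#define MAXSNAP 365			/* e.g. one dump per day */

static uint64_t snaproot[MAXSNAP];	/* root block address per dump */
static int nsnap;

/* Taking a snapshot is O(1): record the current root and move on. */
static int
snapshot(uint64_t root)
{
	snaproot[nsnap] = root;
	return nsnap++;			/* snapshot id */
}

/* Reading an old tree is just walking from its saved root; no copy
 * was ever made and no garbage collection ever runs. */
static uint64_t
snaplookup(int id)
{
	return snaproot[id];
}

int
main(void)
{
	/* Hypothetical root addresses, as returned by put() above. */
	int mon = snapshot(0x1111111111111111ULL);
	int tue = snapshot(0x2222222222222222ULL);

	return (snaplookup(mon) != 0 && snaplookup(tue) != 0) ? 0 : 1;
}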

Not complex, and certainly worth a look for recyclable ideas. Here is an analysis of 'in-use' history:

http://plan9.bell-labs.com/who/seanq/p9trace.html

Bill



