DragonFly users List (threaded) for 2009-02
DragonFly BSD

Re: Fwd: EUREKA - was the 'why' of pseudofs


From: Bill Hacker <wbh@xxxxxxxxxxxxx>
Date: Thu, 19 Feb 2009 01:51:10 +0800

Colin Adams wrote:
---------- Forwarded message ----------
From: Colin Adams <colinpauladams@googlemail.com>
Date: 2009/2/18
Subject: Re: EUREKA - was the 'why' of pseudofs


2009/2/18 Bill Hacker <wbh@conducive.org>:
Proven pattern among Odontata, too:

http://ecoevo.uvigo.es/Olalla/index_en.htm

That should be Odonata (= tooth-jawed) (Dragonflies and damselflies to the rest of you - i.e. Bill is trying to make the posting relevant to DragonFly BSD).

Thanks - typing and proofing was easier when I still had the sight of two eyes...

;-)


Incidentally, I looked at that web page, and by a curious coincidence, I had an email this week from the guy she mentions as her PhD supervisor - as he is the webmaster of the Worldwide Dragonfly Association, and I was complaining about broken links.


'small world..'


Anyway, thanks for the link. I knew about the parthenogenetic population
of Ischnura hastata on the Azores, but I didn't know there was a
downloadable thesis on the subject, so I've just grabbed it.

It's really time I actually started using DragonFly (perhaps to port
GHC to it, as I am programming in Haskell these days). Is it available
64-bit yet?


Dunno. We've been 'greening down' to VIA CPU for lack of enough UPS
budget in the Data Centre, and their first 64-bit is still scarce.

But ISTR at least a couple of the devel team run on AMD-64..

What I've got to sort is whether/how soon DFY uses/will use the in-built
VIA hardware encryption engine.

I've seen tests showing it to need only 5% of the resources a
general-purpose CPU needs for the same encryption/decryption workload,
ergo letting the lowly VIA punch well above its weight in an
increasingly ssh/TLS'ed world. IF the algorithms it supports are among
the choices, anyway...

Meanwhile - and I expect this was already well-known among the
cognoscenti - scp'ing vs mirror-copy'ing from a *single* as-current PFS
snapshot shows that hammerfs on a laptop - especially one that has slept
through four days' worth of cron's reblock-prune - can 'punch above its
weight' also. But in a different way.

For kicks, I've scp'ed from the root of /pfs (/pfs/usr ...) as well as
from the mount-point of each individual (virtual) mount (/usr..).

Naturally, scp -r is expanding the snapshots retained over a four day
period.

Predictable result?

- Four copies on the target from /pfs/usr et al, PLUS the ONE copy from
/usr.  Same files, near-zero actually changed, save for /var/log.  But
scp -r cannot know that there was no change, so....

Five times the storage space needed on the target as on the original.
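The arithmetic can be sketched with a toy example (ordinary directories,
not HAMMER - all paths here are hypothetical): a recursive copy has no
idea the snapshots share nearly all their blocks, so each retained
snapshot lands on the target in full.

```shell
#!/bin/sh
# Toy illustration, NOT HAMMER: four "snapshots" of the same ~1 MiB of
# data, copied the way scp -r would see them -- as independent trees.
demo=$(mktemp -d)
mkdir "$demo/snap1"
dd if=/dev/zero of="$demo/snap1/data" bs=1024 count=1024 2>/dev/null
for n in 2 3 4; do
    cp -R "$demo/snap1" "$demo/snap$n"   # each snapshot stored whole
done
du -sk "$demo"   # roughly 4 x 1024 KiB -- none of the duplication is shared
```

A mirror-copy-style transfer would instead stream one copy plus the
(near-zero) deltas, which is the whole point of the observation above.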

Glad it was not three weeks...

If a man won't take fishing instruction, just let him figure out on his
own how to fish....

... and he'll much better appreciate a fish 'n chips shop...

;-)

Bill



