DragonFly users List (threaded) for 2008-07
Re: Portable vkernel (emulator)
On Fri, Jul 11, 2008 at 2:28 PM, Matthew Dillon
<dillon@apollo.backplane.com> wrote:
> So there's a wide selection, but no single filesystem has the full set
> of features. If one were to compare HAMMER against all of them as a
> group then, sure, I have a ton of work to do. But if you compare
> HAMMER against any single linux filesystem (well, the non-cluster ones
> for the moment), I think you'd be surprised.
You're right about that, but it's not a matter of what I'd choose.
Adoption is going to happen with geeks and companies looking to
improve efficiency. It'll come down to hype and advertising and
getting it onto other systems besides DragonFly. If it's hyped too
early, people will be disappointed and turn away for a long time.
> We have a small developer and user community and it is not likely
> to increase all that much, even with HAMMER. Keep in mind, though,
> that both Ext and Reiser were originally developed when Linux was
> a much smaller project. Having a large project gives you more eyeballs
> and more people pounding the filesystem, but filesystem development
> isn't really governed by that pounding. The bugs don't get worked out
> any faster with 1000 people pounding on something versus 100 people.
>
> All the major filesystems available today can be traced down to usually
> just one or two primary developers (for each one). All the work flows
> through to them and they only have so many hours in a day to work with.
> Very few people in the world can do filesystem development, it's harder
> to do than OS development in my view.
So it's a good thing there are companies like Red Hat paying people
full time to maintain those implementations. Support contracts get the
users of the software paying for its maintenance, and it works very
well for Linux.
ZFS has Sun employees working around the clock too. And now they have
the FreeBSD community helping out as well, and serving as the first
real-world port. If licensing can be worked out and Linux gets ZFS,
it's easy to see it gaining a huge user base and no shortage of
testers.
> UFS went essentially undeveloped for 20 years because the original
> developers stopped working on it. All the work done on it since FFS
> and BSD4-light (circa 1983-1991-ish) has mostly been in the form of
> hacks. Even the UFS2 work in FreeBSD is really just a minor extension
> to UFS, making fields 64 bits instead of 32 bits and a few other
> little things, and nothing else.
I agree they're hacks, but it's certainly worth appreciating that
things like soft updates and background fsck (the most terrifying hack
of all) have kept FreeBSD relevant even without a proper journalling
file system.
> Ext, Reiser, Coda... I think every single filesystem you see on Linux
> *EXCEPT* the four commercial ones (Veritas, IBM's stuff, Sun's ZFS,
> and SGI's XFS) are almost single-person projects. So I'm not too worried
> about my ability to develop HAMMER :-)
Right, but who's going to test it? ext3 is being tested on almost
every Linux desktop today, and many servers. While it doesn't have
many "developers", looking at patches being applied, lots of people
make contributions if problems do come up. I agree that pound for
pound, DragonFly's user base has a much higher proportion of
developers and eager testers, but they might not necessarily test the
implementations ported to other systems, which are the ones that will
get tens of thousands of users instead of hundreds.
> On to VM. Well, the thing is that they in fact *DO* matter. Only
> an idle system can be hacked into having very low cost. Everything
> is relative. If the cpu requirements of the workloads aren't changing
> very quickly these days then the huge relative cost of the system
> calls becomes less important for those particular workloads, but if
> you have a workload that needs 100% of your machine resources you
> will quickly get annoyed at the VMs and start running your application
> on native hardware.
But that's just what I'm saying. Current-generation virtualisation
definitely has a place, and that place was never high-performance
computing.
I'm just saying that KVM is a reasonably good solution to Michael's
problem, and nothing you've said contradicts that.
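To put a rough number on the system call cost, the quickest thing is
to time a pile of getpid() round trips, once on bare metal and once
inside the guest. A sketch only, assuming clock_gettime(2); it goes
through syscall(2) so a getpid() cached by libc can't hide the trap:

    /*
     * Crude system call overhead microbenchmark: time N getpid()
     * round trips into the kernel and print the per-call cost.
     */
    #include <sys/syscall.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int
    main(void)
    {
        const long iterations = 1000000;
        struct timespec start, end;
        double elapsed;
        long i;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (i = 0; i < iterations; i++)
            syscall(SYS_getpid);
        clock_gettime(CLOCK_MONOTONIC, &end);

        elapsed = (end.tv_sec - start.tv_sec) +
            (end.tv_nsec - start.tv_nsec) / 1e9;
        printf("%.1f ns per getpid()\n", elapsed / iterations * 1e9);
        return 0;
    }

The absolute figure doesn't matter much; the interesting part is the
ratio between the native and virtualised runs.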
> Yah, I read about the linux work. That was mainly IBM I think,
> though to be truthful it was primarily solved simply by IBM reducing
> the clock interrupt rate, which I think pushed the linux folks to
> move to a completely dynamic timing system.
I think the dynamic timing system was more motivated by power savings
than virtualisation, but it's a nice bonus.
> Similarly for something like FreeBSD or DragonFly, reducing the clock
> rate makes a big difference. DragonFly's VKERNEL drops it down to
> 20Hz.
Would DragonFly be able to implement dynamic ticks as well? Perhaps
it's not a huge priority but it's something people expect for modern
power-efficient systems. It may not matter quite so much when the CPU
is a tiny part of the total system power, but with 8 CPUs it adds up
pretty quickly.
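Incidentally, the current tick rate is trivial to inspect from
userland. A minimal sketch, assuming the kern.clockrate sysctl and
struct clockinfo that both FreeBSD and DragonFly export:

    /*
     * Print the kernel's clock interrupt rates as reported by the
     * kern.clockrate sysctl.
     */
    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <sys/time.h>
    #include <stdio.h>

    int
    main(void)
    {
        struct clockinfo ci;
        size_t len = sizeof(ci);

        if (sysctlbyname("kern.clockrate", &ci, &len, NULL, 0) != 0) {
            perror("sysctlbyname");
            return 1;
        }
        printf("hz=%d stathz=%d profhz=%d\n", ci.hz, ci.stathz,
            ci.profhz);
        return 0;
    }

If the 20Hz figure is what actually gets configured, that should come
back as hz=20 inside a vkernel.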
> I don't think virtualization is used for performance reasons, most
> such deployments are going to assume at least a 20% loss in performance
> across the board. The reason virtualization is used is because crazily
> enough it is far, far easier to migrate and hot-swap whole virtualized
> environments than it is to migrate or hot-swap a single process.
>
> Is that nuts? But that's why. CPU power is nearly free, but a loss
> of reliability costs real money. Not so much 100% uptime, just making
> the downtime in the sub-second range, when something fails, is what
> is important. Virtualization turns racks of hardware into commodities
> that can simply be powered up and down at a whim without impacting the
> business. At least as long as we're not talking about financial
> transactions.
Which is what I've been saying. It's just good that the likes of KVM
and Xen keep chipping away at that overhead. I'm just not sure what
you're trying to argue.
Of course they're not native speed yet, and they probably never will
be until "native" is redefined. But the speed increases are opening up
more and more opportunities, even when speed itself is not the driving
factor.
> No open source OS today is natively clusterable. Not one. Well, don't
> quote me on that :-). I don't think OpenSolaris is, but I don't know
> much about it. Linux sure as hell isn't, it takes reams of hacks to
> get any sort of clustering working on linux and it isn't native to the
> OS. None of the BSDs. Not DragonFly, not yet. Only some of the
> big commercial mainframe OSs have it.
That's probably because nobody really cares. Clustering is almost
universally done at the application level, where it can be optimized
much better for the specific work being done. Mass deployment is a
solved problem. Machines are getting bigger and more scalable and more
parallel and cheaper, significantly weakening the argument for
multiple slow machines clustered together.
> Yah, and it's doable up to a point. It works great for racks of
> servers, but emulating everything needed for a workstation environment
> is a real mess. VMWare might as well be its own OS, and in that respect
> the hypervisor support that Linux is developing is probably a better
> way to advance the field. VMWare uses it too but VMWare is really a
> paper tiger... it wants to be an OS and a virtualization environment,
> so what happens to it when Linux itself becomes a virtualization
> environment?
The Linux community wants Linux to be everything. And to some degree,
I agree. Just having at least one open source solution for every
problem is a great economic freedom. The network effect of Linux as a
whole is what makes it so much more powerful than any individual
product it eventually replaces. VMware will probably end up rebasing
on Linux to some degree.
http://en.wikipedia.org/wiki/VMware_ESX_Server#Architecture
Oh.
--
Dmitri Nikulin
Centre for Synchrotron Science
Monash University
Victoria 3800, Australia