DragonFly users List (threaded) for 2007-07
DragonFly BSD

Re: Open Mosix

From: "Martin P. Hellwig" <xng@xxxxxxxxx>
Date: Wed, 18 Jul 2007 19:37:30 +0200

Matthew Dillon wrote:
:I think it is not irrelevant to mention here the announcement:
:"Moshe Bar, openMosix founder and project leader, has announced plans to end
:the openMosix Project effective March 1, 2008.
:
:The increasing power and availability of low cost multi-core processors is
:rapidly making single-system image (SSI) Clustering less of a factor in
:computing. The direction of computing is clear and key developers are
:moving into newer virtualization approaches and other projects. "

    Well, I don't think multi-core really solves the same problem that SSI
    does, but I also firmly believe that doing SSI properly requires complete
    integration into the kernel to really be effective.

Matthew Dillon <dillon@backplane.com>

That could backfire nicely in this project's favor: if users of openMosix don't step in and maintain it, they may go looking for something comparable and end their search with DF (once it has these capabilities) :-)

It would definitely help the spread of this project, and of BSD in general; most administrators I know who have worked seriously on multiple projects end up preferring some kind of BSD over Linux. Mostly because they get a 'distro' that covers the system as a whole. That doesn't reduce maintenance by itself, since almost everything is already regulated and automated, but it does simplify setting up those automations.

By the way, how far are we planning to take that SSI? Is it like having a single VKernel spanning X amount of hosts? How do you plan to handle network access to that kernel? I would be thrilled if it ends up like sharing a NIC from each real host to the VKernel, so that the VKernel itself could team up all the NICs and assign a single IP to them.

If storage can follow roughly the same path as the NICs mentioned above, with an additional option to control the number of copies of the files (e.g. /usr should be available on at least 3 nodes, preferably spread across the most geographically distant locations), then it would definitely be a killer feature. Well, at least for me.

I am still hoping for a system where I can just do package installation and maintenance as if I were working on a single machine, but still be able to power off a node, do some hardware maintenance on it, and power it on again. While booting, the node reattaches itself to the cluster and happily shares its resources.

