DragonFly BSD
DragonFly users List (threaded) for 2005-09

Re: UFS filesystem size limit


From: Andreas Hauser <andy@xxxxxxxxxxxxxxx>
Date: 3 Sep 2005 21:54:49 -0000

dillon wrote @ Sat, 3 Sep 2005 09:49:03 -0700 (PDT):

>     A friend of mine swears by linux, but curses just about every filesystem
>     he tries (and curses UFS as well).  Linux FS's have a lot of hype but
>     the only thing that they really have going for them is the 'instant
>     reboot' feature... if you trust them enough, that is.  Reiser is 
>     unbelievably sensitive to disk errors, to the point where you can lose
>     the whole filesystem when something unexpected happens.  JFS has poor
>     performance.  etc etc.  Linux filesystems are not poster childs.
>     Frankly, anyone who feels a need to put a million files into a single
>     directory gets when they deserve.

Regarding trust, ext2, ext3 and UFS all play in the same league for me,
judging by the problems I have had with each of them.

Well, JFS uses less CPU than XFS, which makes it faster on a busy server.
But let's not forget that XFS and JFS were not native to Linux but were
imported, and degraded somewhat in the move.

Some consolidation would certainly be nice though (fewer but better
filesystems), and with better licenses for sharing with us :)

If the hardware dies on you, you cannot trust the FS anymore anyway.
Restoring from backups is the much more secure and often faster way.
Hiding disk problems does not really help, since with today's hardware
you can be pretty sure it will die sooner or later.

>     There are two things I want for UFS:  (1) Nearly instant reboots
>     (without having to depend on softupdates), and (2) an ability to grow
>     or shrink the filesystem.  Both are quite achievable goals.


The most interesting points in the filesystem area for me are:

* Snapshots
  This finally makes backups atomic
  -> no more "file has changed while backing up".
  Second, it makes it easy to make the backups available to the users [1], like
  "/var/backup/${YEAR}/${MONTH}/${DAY}" or, as some have it, "./.snapshot".
  -> no more "I just deleted this file. Can you restore it?"
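  As a sketch of what that backup workflow could look like (assuming
  FreeBSD-style UFS snapshots via mksnap_ffs(8) plus mdconfig(8); the
  paths and device names here are made up, adjust for your system):

```shell
# Hypothetical nightly backup from a UFS snapshot.

# 1. Take an atomic snapshot of /var inside the filesystem itself.
mksnap_ffs /var /var/.snap/nightly

# 2. Attach the snapshot file as a read-only memory disk and mount it.
md=$(mdconfig -a -t vnode -o readonly -f /var/.snap/nightly)
mount -o ro /dev/$md /mnt/backup

# 3. Back up from the frozen image -- no "file changed while reading".
tar -C /mnt/backup -cf /backup/var-$(date +%Y%m%d).tar .

# 4. Clean up: unmount, detach the memory disk, drop the snapshot.
umount /mnt/backup
mdconfig -d -u $md
rm /var/.snap/nightly
```

  Making /var/.snap world-readable (or symlinking it as ./.snapshot)
  would then give users the self-service restores mentioned above.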

* Journaling
  If this works out the way I hope, it will be easy to mirror everything to
  a failover server. No more expensive rsync(1)s or complex mail setups to
  get the mail to the backup mirror. Fast recovery after a crash is nice
  too, but then it shouldn't crash in the first place ;) In high
  availability scenarios you probably prefer the failover server anyway,
  especially since you want to debug the crash. So 5 minutes for
  fsck or log replay are OK for me; it need not be instant.

* volume management
  Growing and shrinking are not so interesting, but hot-plugging another
  disk and extending a live FS, that would be great; it need not be as
  complex as vinum though. I always liked how that works on Tru64 (or
  whatever its nom du jour is). Like "mkdir volume1 && cd volume1 &&
  ln -s /dev/disk1 . && ln -s /dev/disk2 .", there you go.

* Networked FS
  Cluster FSs like GFS or Lustre are interesting, and not only for computing
  clusters; e.g. we have 4-8 NFS servers serving one shared GFS here for the
  home dirs. I am not sure whether the journaling can enable such things,
  but AFAIR you said something in that direction.
  NFSv4[2] is something I badly want, and I am sure a year down the road it
  will be a hard requirement in many places. I always imagined I could just
  export my /usr/ports and most problems with ports would go away ;)
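  To illustrate the ports idea: with classic BSD exports(5) syntax it could
  be as small as one line (client hostnames here are hypothetical):

```
/usr/ports -ro -maproot=nobody buildbox1 buildbox2
```

  Every client then sees the same read-only tree, instead of each machine
  keeping its own copy in sync.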

* excessive usage
  Zettabyte filesystems with a trillion files per dir would be nice :-)
  In the cluster area such things are needed. Look at the problems CERN
  has with the amount of data coming out of the LHC[3].


Andy

[1]
This is done really well in Plan 9:
http://cm.bell-labs.com/magic/man2html/4/fossil

[2]
There are patches for nearly all BSD systems out there:
ftp://ftp.cis.uoguelph.ca/pub/nfsv4
Though Jeff thinks the OpenSolaris implementation is the better way to go.

[3]
http://public.web.cern.ch/Public/Content/Chapters/AboutCERN/CERNFuture/WhatLHC/WhatLHC-en.html


