DragonFly BSD
DragonFly kernel List (threaded) for 2004-10

Re: Kernel update / DragonFly_Stable tag reverted to September 13th


From: Matthew Dillon <dillon@xxxxxxxxxxxxxxxxxxxx>
Date: Fri, 8 Oct 2004 10:56:16 -0700 (PDT)

:-On [20041008 13:52], Jeroen (koffieyahoo@xxxxxxxxxxx) wrote:
:>I don't really know anything of the VFS code, but would it be easier to 
:>leave the old VFS code alone and write the new VFS code as a sort of "shadow 
:>system" which does not really give rise to disk operations, but just 
:>compares its state with that of the old VFS system and panics (or does 
:>something else noticeable) when it is inconsistent with the state of the old 
:>VFS system? Afterwards when the new code is done you can rip out the old VFS 
:>code in one blow.
:
:I think going this way will lead to insanity.
:
:I am slightly familiar with the VFS code and only because I was helping
:Adrian Chadd at one point with the documentation of it.
:
:Trying to have both the old and new next to each other and doing comparisons
:of the data would cause a huge decrease in performance.
:
:But that's a VFS newbie's take on it.
:
:-- 
:Jeroen Ruigrok van der Werven <asmodai(at)wxs.nl> / asmodai / kita no mono

    Well, my original idea was to implement the new namecache topology and
    then have both the old and new vfs cache APIs use the new topology.
    Thus the code in HEAD right now has cache_lookup() (old API) and
    cache_nlookup() (new API), namei/lookup (old API) and nlookup/resolve
    (new API), and so forth.
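
    To give a rough idea of the two parallel sets of entry points, the
    prototypes are approximately as follows (from memory, not copied out
    of the tree, so treat the exact signatures as approximate):

        /* Old, vnode-centric API (namei/lookup, cache_lookup): */
        int     cache_lookup(struct vnode *dvp, struct vnode **vpp,
                        struct componentname *cnp);
        int     namei(struct nameidata *ndp);

        /* New, namecache-centric API (nlookup/resolve, cache_nlookup): */
        struct namecache *cache_nlookup(struct namecache *par,
                        struct nlcomponent *nlc);
        int     nlookup(struct nlookupdata *nd);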

    I then started converting things over to the new API on a system call by
    system call basis.  e.g. [l]stat, rmdir, etc.
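
    For illustration, a converted stat-style path changes from resolving a
    locked vnode via namei() to resolving a namecache entry via nlookup()
    and then fetching the vnode from it.  This is a simplified sketch with
    approximate names, not the committed code:

        /* Old API: */
        NDINIT(&nd, NAMEI_LOOKUP, CNP_FOLLOW | CNP_LOCKLEAF,
            UIO_USERSPACE, upath, td);
        if ((error = namei(&nd)) == 0) {
                error = vn_stat(nd.ni_vp, &sb, td);
                vput(nd.ni_vp);
        }

        /* New API: */
        error = nlookup_init(&nld, upath, UIO_USERSPACE, NLC_FOLLOW);
        if (error == 0 && (error = nlookup(&nld)) == 0 &&
            (error = cache_vget(nld.nl_ncp, cred, LK_SHARED, &vp)) == 0) {
                error = vn_stat(vp, &sb, td);
                vput(vp);
        }
        nlookup_done(&nld);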

    That worked fine, *except* for the tiny little (big huge) unintended
    consequence of badly breaking the vnode locking scheme.

    I think the way it's going to work is that instead of trying to do the
    stageable work first, that is, approaching it on a system call by
    system call basis, I am going to have to do the infrastructure first,
    which is not really stageable, get that stabilized, and *then* go back
    and convert the system calls over.

    The biggest vfs infrastructure pieces are:

    * vnode locking and reclamation
    * vop_lookup's handling of NAMEI_DELETE lookups (which record side
      information that later VOP_*() functions need; see the sketch after
      this list)
    * The VOP_RENAME API.
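
    To illustrate the second point with a UFS-flavored sketch (field and
    function names are approximate): a NAMEI_DELETE lookup stashes the
    directory slot it found in the directory's in-core inode, and the
    directory-modifying VOP that runs later silently depends on that
    stashed state still being valid:

        /* In ufs_lookup(), when the operation is a DELETE lookup: */
        dp->i_offset = slot_offset;     /* where the entry was found   */
        dp->i_count  = slot_size;       /* size of the slot to reclaim */

        /* Later, in ufs_remove()/ufs_rmdir(): */
        error = ufs_dirremove(dvp, ip, cnp->cn_flags, 0);
        /* ...which trusts the i_offset/i_count recorded by the lookup. */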

    And, generally, the old API's desire to execute namecache operations
    before knowing whether a VOP has succeeded or not, versus the new API's
    desire to integrate namecache operations with the VOPs.  So, e.g. the
    old RENAME interface tried to purge various portions of the namecache
    before running VOP_RENAME.  The new API will actually *pass* locked
    source and target namecache pointers *TO* VOP_RENAME, and VOP_RENAME
    will adjust them accordingly by calling the new cache_*() API
    functions, in order to maintain an unbroken topology.
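
    In other words, the rename path heads toward something with roughly
    the following shape.  The signature is hypothetical and the cache_*()
    name is approximate; it is only meant to convey the direction:

        /*
         * Hypothetical new-API rename: the resolved, locked source and
         * target namecache entries are handed to the filesystem.
         */
        int
        examplefs_nrename(struct namecache *fncp, struct namecache *tncp,
            struct ucred *cred)
        {
                int error;

                error = 0;      /* ... perform the on-disk rename ... */
                if (error == 0) {
                        /*
                         * Only after the rename is known to have
                         * succeeded does the filesystem adjust the name
                         * topology, so the namecache never reflects a
                         * rename the VOP later failed to carry out.
                         */
                        cache_rename(fncp, tncp);
                }
                return (error);
        }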

    I've got the first piece almost working, but it is starting to look
    like I'll have to do all three together, plus a bunch of other 
    infrastructure cleanups (like making v_lock mandatory and changing
    the VOP_LOCK and VOP_UNLOCK vectors to only operate on sublayers, which
    only filesystems like nullfs and unionfs really need to do anything
    special for).  Then that whole mess would be stabilized and committed
    in one huge patch set.  After that the remainder can be done piecemeal.
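
    For the locking piece, the direction is roughly the following (a
    sketch with approximate names, not the code that will be committed):

        /*
         * With v_lock mandatory, the generic layer can always operate on
         * the vnode's embedded lock directly...
         */
        error = lockmgr(&vp->v_lock, LK_EXCLUSIVE | LK_RETRY, NULL, td);

        /*
         * ...while VOP_LOCK()/VOP_UNLOCK() shrink to hooks that only
         * stacked filesystems such as nullfs and unionfs implement, to
         * propagate the lock to the vnode(s) they shadow.  Everyone else
         * gets a no-op default.
         */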

					-Matt
					Matthew Dillon 
					<dillon@xxxxxxxxxxxxx>


