
vfsx17.patch available - continuing vfs work (expert developers only)


From: Matthew Dillon <dillon@xxxxxxxxxxxxxxxxxxxx>
Date: Thu, 4 Nov 2004 23:12:20 -0800 (PST)

    The next patch is ready.  I haven't dealt with the NFS server, unionfs,
    or nullfs yet; I've been working around the edges cleaning things up.

	fetch http://leaf.dragonflybsd.org/~dillon/vfsx17.patch

    This patch cleans up the relookup() code used by *_rename() VOPs, removes
    a great deal more of the 'old' API and old infrastructure, including
    nearly all the CNP_* flags, and simplifies the *_lookup() VOPs.  It also
    documents a bunch of stuff, makes systat's namecache stats work again,
    and fixes a few bugs that David Rhodus found and a few more that I found.
    As of this patch the new namecache topology is officially 'unbroken',
    with no special cases left.
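
    To give a sense of the shape of the change (schematic only -- this is
    not the actual vfsx17 code, and somehow_find_parent() is a made-up
    stand-in): the old API made every filesystem interpret a name string
    plus CNP_* control flags, while with an unbroken topology the
    namecache entry itself carries the name and the parent linkage.

        /* old style: flag-driven, each fs special-cases ".." itself */
        if (cnp->cn_flags & CNP_ISDOTDOT)
                vp = somehow_find_parent(dvp);  /* hypothetical helper */

        /*
         * new style: ".." is just a pointer chase through the topology,
         * so most of the CNP_* flags can go away
         */
        ncp = ncp->nc_parent;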

    I took a pass at the NFS server, but the result was such a big mess
    that I backed it out of my local tree in disgust.  The issue with the
    NFS server is that the remote client hands it a file handle (basically
    an inode number), which it uses to obtain the vnode.  Unfortunately
    this means it might obtain a vnode that is way out in the middle of
    nowhere, with no namecache topology leading to it.
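
    Concretely, the handle-to-vnode step looks roughly like this (a
    schematic sketch, not the real NFS server code; struct fhandle from
    <sys/mount.h> is essentially a filesystem id plus a filesystem-private
    fid, i.e. an inode number and generation):

        mp = vfs_getvfs(&fhp->fh_fsid);         /* locate the mount */
        if (mp == NULL)
                return (ESTALE);
        error = VFS_FHTOVP(mp, &fhp->fh_fid, &vp); /* fid -> vnode */
        /*
         * vp is now a perfectly good vnode, but nothing here tells us
         * its name or its parents, so no namecache chain leads to it.
         */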

    I tried to solve the problem by creating a 'dummy' namecache record
    for the vnode, but the lack of knowledge about the filename of the
    vnode resulted in duplicate namecache records being created, which
    required a mess of code to resolve.  It also didn't solve the ".."
    lookup problem that NFS servers have to be able to handle.

    I am going to try again tomorrow, this time by doing a brute-force
    ".." / directory-scan loop to resolve the namecache topology leading to
    the vnode the client wanted.  This sounds burdensome but, in fact, while
    each scan has a lot of overhead it doesn't actually have to happen very
    often, because the namecache is, well, cached.  The entry in question is
    already likely to be in the cache, and if it isn't, its parent directory
    almost certainly is.  So despite its brute-force nature I think this
    will be an efficient solution.
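
    The same loop can be sketched in user space with plain POSIX calls
    (this is also how getcwd(3) was classically implemented).  The sketch
    below is only an illustration of the algorithm, not the kernel code;
    in the kernel each discovered component would be entered into the
    namecache, so the cost is paid once per cold entry:

        #include <sys/types.h>
        #include <sys/stat.h>
        #include <dirent.h>
        #include <limits.h>
        #include <stdio.h>
        #include <string.h>

        /*
         * Find the absolute path of 'dir' by brute force: repeatedly
         * look up ".." and scan the parent directory for the entry
         * whose device/inode pair matches the child.
         */
        static int
        path_by_scanning(const char *dir, char *out, size_t outlen)
        {
                char up[PATH_MAX], tmp[PATH_MAX], ep[PATH_MAX];
                char comp[PATH_MAX], pre[PATH_MAX];
                struct stat child, parent, st;
                struct dirent *de;
                DIR *dp;

                out[0] = '\0';
                snprintf(up, sizeof(up), "%s", dir);
                if (stat(up, &child) < 0)
                        return (-1);

                for (;;) {
                        snprintf(tmp, sizeof(tmp), "%s/..", up);
                        if (stat(tmp, &parent) < 0)
                                return (-1);
                        if (parent.st_dev == child.st_dev &&
                            parent.st_ino == child.st_ino)
                                break;  /* ".." is self: hit the root */

                        /* scan the parent for the child's name */
                        if ((dp = opendir(tmp)) == NULL)
                                return (-1);
                        comp[0] = '\0';
                        while ((de = readdir(dp)) != NULL) {
                                if (strcmp(de->d_name, ".") == 0 ||
                                    strcmp(de->d_name, "..") == 0)
                                        continue;
                                snprintf(ep, sizeof(ep), "%s/%s",
                                    tmp, de->d_name);
                                if (stat(ep, &st) == 0 &&
                                    st.st_dev == child.st_dev &&
                                    st.st_ino == child.st_ino) {
                                        snprintf(comp, sizeof(comp),
                                            "%s", de->d_name);
                                        break;
                                }
                        }
                        closedir(dp);
                        if (comp[0] == '\0')
                                return (-1);    /* entry disappeared */

                        /* prepend the component we just discovered */
                        snprintf(pre, sizeof(pre), "/%s%s", comp, out);
                        snprintf(out, outlen, "%s", pre);

                        /* step up one level and repeat */
                        child = parent;
                        snprintf(up, sizeof(up), "%s", tmp);
                }
                if (out[0] == '\0')
                        snprintf(out, outlen, "/");
                return (0);
        }

        int
        main(int argc, char **argv)
        {
                char path[PATH_MAX];

                if (path_by_scanning(argc > 1 ? argv[1] : ".", path,
                    sizeof(path)) == 0)
                        printf("%s\n", path);
                return (0);
        }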

    Maintaining an unbroken topology also has an interesting side effect:
    it becomes possible to safely export multiple subdirectories of
    the same filesystem with *different* export permissions, without having
    to hand the whole filesystem to the client... something that cannot be
    done with most existing NFS servers.  I don't plan on implementing the
    feature myself, since I have to move on to the other items on my list,
    but I will write it up for anyone who wants to have a go at it once I
    get the NFS server working properly again.
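
    As a hypothetical example (exports(5)-style syntax; again, this is a
    feature sketch, not something that works today), /usr/src and /usr/obj
    on the same filesystem could be exported with different permissions:

        /usr/src  -ro            -network 10.0.0.0 -mask 255.0.0.0
        /usr/obj  -maproot=root  -network 10.0.0.0 -mask 255.0.0.0

    With handle-based servers this is unenforceable, because a client
    given /usr/src can construct handles that reach anywhere in the
    filesystem; with an unbroken topology the server could presumably
    walk the resolved vnode's namecache chain upward and verify that it
    really falls under the exported subtree before honoring the handle.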

    Still TODO:	NFS server, unionfs, nullfs. 

					-Matt
					Matthew Dillon 
					<dillon@xxxxxxxxxxxxx>


