DragonFly kernel List (threaded) for 2008-04

Re: dfbsd nfs client - file descriptor leak


From: Matthew Dillon <dillon@xxxxxxxxxxxxxxxxxxxx>
Date: Wed, 23 Apr 2008 09:23:44 -0700 (PDT)

:> The problem is: when accessing files from dfbsd client, nfs server
:> "leaks" file descriptors
:
:Hmm, interesting... nfs servers don't open files (nfsv4 has an Open, but it
:is really a type of file lock and not a POSIX-like open).
:
:My thought is that it might be doing a lot of reconnects and ending up
:with lots of sockets on the server? You could take a look at "netstat -a" on
:the server box while this is happening and see if there are lots of connections
:from the client (or switch to using UDP and see if that makes the problem go
:away).
:
:One of the problems in an NFS client using TCP is deciding how
:long to wait for a response on a TCP connection before giving up and creating
:a new connection. The really old BSD code waited until the TCP layer decided
:the connection was dead, but that could take a very long time. My current
:client (not what is in DragonflyBSD) waits 1 minute, which seems to be
:working out pretty well, but...
:
:Good luck with it, rick

    Well, NFSv3 is a stateless protocol, meaning that it shouldn't be
    possible for the client to hold server resources open.  If there is
    a file descriptor leak on the server side, it's 99% certain to be
    a bug in the server.
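
    If you want to confirm that, a rough check on the server itself is to
    watch the global open-file count while the client runs its test
    (this assumes a FreeBSD/DragonFly style server; other systems have
    equivalents such as lsof):

	# system-wide open descriptor count
	sysctl kern.openfiles

	# or enumerate the open files and count them
	fstat | wc -l

    If the count climbs steadily and never drops back after the client
    finishes, the descriptors are being held server-side.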

    TCP reconnections should have no effect on the server's descriptor
    handling.  It's unlikely that DFly would be reconnecting anyway,
    unless you noticed long stalls while testing.
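
    It is easy to rule reconnects out the way rick suggests, though: count
    the connections from the client to nfsd on the server while the test
    runs.  The address below is just a placeholder for your client:

	# on the server: nfsd (port 2049) connections from the client
	netstat -an | grep 10.0.0.5 | grep -c 2049

    A stable count of one means reconnects aren't happening; a steadily
    growing count would point at socket churn instead.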

    The biggest difference between the DragonFly and FreeBSD client
    implementations is that the DragonFly client has a positive namecache
    timeout as well as an access cache timeout, and thus may re-stat
    files more often.

    The default parameters are set very conservatively on DragonFly.  You
    can experiment with them via these sysctls (values are in seconds):

    vfs.nfs.access_cache_timeout: 5
    vfs.nfs.neg_cache_timeout: 3
    vfs.nfs.pos_cache_timeout: 3
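
    For example, if the server is being hammered with GETATTR/ACCESS
    traffic you could lengthen the caches.  The values below are only
    illustrative, not recommendations:

	sysctl vfs.nfs.access_cache_timeout=30
	sysctl vfs.nfs.pos_cache_timeout=10
	sysctl vfs.nfs.neg_cache_timeout=10

    The trade-off is coherency: the longer the timeouts, the longer it
    takes the client to notice changes made by other hosts.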

					-Matt
					Matthew Dillon 
					<dillon@backplane.com>


