DragonFly commits List (threaded) for 2008-06

cvs commit: src/sys/vfs/hammer hammer.h hammer_blockmap.c hammer_btree.c hammer_cursor.h hammer_disk.h hammer_flusher.c hammer_freemap.c hammer_inode.c hammer_io.c hammer_object.c hammer_ondisk.c hammer_recover.c hammer_subs.c hammer_vfsops.c ...


From: Matthew Dillon <dillon@xxxxxxxxxxxxxxxxxxxxxxx>
Date: Sat, 7 Jun 2008 00:41:51 -0700 (PDT)

dillon      2008/06/07 00:41:51 PDT

DragonFly src repository

  Modified files:
    sys/vfs/hammer       hammer.h hammer_blockmap.c hammer_btree.c 
                         hammer_cursor.h hammer_disk.h 
                         hammer_flusher.c hammer_freemap.c 
                         hammer_inode.c hammer_io.c 
                         hammer_object.c hammer_ondisk.c 
                         hammer_recover.c hammer_subs.c 
                         hammer_vfsops.c hammer_vnops.c 
  Log:
  HAMMER 53A/Many: Read and write performance enhancements, etc.
  
  * Add hammer_io_direct_read().  For full-block reads this code allows
    a high-level frontend buffer cache buffer associated with the
    regular file vnode to directly access the underlying storage,
    instead of loading that storage via a hammer_buffer and bcopy()ing it.
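
    As a sketch only (helper names and the block size are invented for
    illustration; this is not the HAMMER code), the difference between
    the copying path and the direct path can be modeled in userspace as:

      /*
       * Buffered path: stage the block in an intermediate buffer and
       * copy it out -- the extra copy is what direct read avoids.
       */
      #include <sys/types.h>
      #include <string.h>
      #include <unistd.h>

      #define FULL_BLOCK      16384           /* assumed block size */

      static ssize_t
      buffered_read(int fd, void *dst, off_t off)
      {
              char staging[FULL_BLOCK];
              ssize_t n = pread(fd, staging, FULL_BLOCK, off);

              if (n > 0)
                      memcpy(dst, staging, (size_t)n);  /* the bcopy() */
              return (n);
      }

      /*
       * Direct path: the caller's buffer is filled straight from the
       * underlying storage in one operation, with no staging copy.
       */
      static ssize_t
      direct_read(int fd, void *dst, off_t off)
      {
              return (pread(fd, dst, FULL_BLOCK, off));
      }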
  
  * Add a write bypass, allowing the frontend to bypass the flusher and
    write full blocks directly to the underlying storage, greatly improving
    frontend write performance.  Caveat: See note at bottom.
  
    The write bypass is implemented by adding a feature whereby the frontend
    can soft-reserve unused disk space on the physical media without having
    to interact (much) with on-disk meta-data structures.  This allows the
    frontend to flush high-level buffer cache buffers directly to disk
    and release the buffer for reuse by the system, resulting in very high
    write performance (see the sketch below).
  
    To properly associate the reserved space with the filesystem so it can be
    accessed in later reads, an in-memory hammer_record is created referencing
    it.  This record is queued to the backend flusher for final disposition.
    The backend disposes of the record by inserting the appropriate B-Tree
    element and marking the storage as allocated.  At that point the storage
    becomes official.
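
    As a rough userspace model of the whole bypass (all types and names
    below are invented for illustration and are not the HAMMER code):
    the frontend soft-reserves space by advancing an in-memory offset,
    writes the data there, and queues an in-memory record; the backend
    later indexes the record, at which point the storage is official.
    A plain tail queue stands in for the B-Tree and locking is omitted.

      #include <sys/queue.h>
      #include <stdatomic.h>
      #include <stdint.h>

      #define FULL_BLOCK      16384           /* assumed block size */

      struct zone {
              _Atomic uint64_t next_offset;   /* in-memory only */
              uint64_t         limit;         /* end of usable space */
      };

      struct record {
              uint64_t file_offset;           /* logical offset in file */
              uint64_t media_offset;          /* soft-reserved storage */
              uint64_t bytes;
              TAILQ_ENTRY(record) entry;
      };

      static TAILQ_HEAD(recq, record) flush_queue =
              TAILQ_HEAD_INITIALIZER(flush_queue);
      static struct recq btree_index = TAILQ_HEAD_INITIALIZER(btree_index);

      /*
       * Soft-reserve one full block.  No on-disk meta-data is touched;
       * returns 0 on success, -1 if the zone is exhausted.
       */
      static int
      reserve_block(struct zone *z, struct record *rec)
      {
              uint64_t off = atomic_fetch_add(&z->next_offset, FULL_BLOCK);

              if (off + FULL_BLOCK > z->limit)
                      return (-1);
              rec->media_offset = off;
              rec->bytes = FULL_BLOCK;
              return (0);
      }

      /* Frontend: data already written at rec->media_offset; queue it. */
      static void
      frontend_queue_record(struct record *rec)
      {
              TAILQ_INSERT_TAIL(&flush_queue, rec, entry);
      }

      /*
       * Backend flusher: index the record (a B-Tree insert in the real
       * code) and mark its storage allocated; it is now official.
       */
      static void
      backend_flush_one(void)
      {
              struct record *rec = TAILQ_FIRST(&flush_queue);

              if (rec == NULL)
                      return;
              TAILQ_REMOVE(&flush_queue, rec, entry);
              TAILQ_INSERT_TAIL(&btree_index, rec, entry);
      }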
  
  * Clean up numerous procedures to support the above new features.  In
    particular, do a major cleanup of the cached truncation offset code
    (this is the code which allows HAMMER to implement wholly asynchronous
    truncate()/ftruncate() support; the idea is sketched below).
  
    Also clean up the flusher triggering code, removing numerous hacks that
    had been in place to deal with the lack of a direct-write mechanism.
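
    A minimal model of the cached truncation offset (simplified and
    invented for illustration; not the actual HAMMER structures):
    truncate() just records the new boundary in the in-memory inode and
    returns, and the backend prunes the on-disk records later.

      #include <stdint.h>

      #define NO_TRUNCATION   UINT64_MAX

      struct mem_inode {
              uint64_t size;
              uint64_t trunc_off;     /* pending cut, or NO_TRUNCATION */
      };

      /* Frontend: wholly asynchronous truncate -- no disk I/O here. */
      static void
      inode_truncate(struct mem_inode *ip, uint64_t new_size)
      {
              if (new_size < ip->trunc_off)
                      ip->trunc_off = new_size;   /* lowest cut wins */
              ip->size = new_size;
      }

      /* Backend: delete records at offsets >= trunc_off, then clear it. */
      static void
      backend_finish_truncation(struct mem_inode *ip)
      {
              /* ...prune B-Tree/data records past ip->trunc_off... */
              ip->trunc_off = NO_TRUNCATION;
      }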
  
  * Start working on statistics gathering to track record and B-Tree
    operations.
  
  * CAVEAT: The backend flusher creates a significant cpu burden when flushing
    a large number of in-memory data records.  Even though the data itself
    has already been written to disk, there is currently a great deal of
    overhead involved in manipulating the B-Tree to insert the new records.
    Overall write performance will only be modestly improved until these
    code paths are optimized.
  
  Revision  Changes    Path
  1.74      +27 -10    src/sys/vfs/hammer/hammer.h
  1.15      +253 -26   src/sys/vfs/hammer/hammer_blockmap.c
  1.50      +3 -5      src/sys/vfs/hammer/hammer_btree.c
  1.20      +9 -0      src/sys/vfs/hammer/hammer_cursor.h
  1.35      +4 -0      src/sys/vfs/hammer/hammer_disk.h
  1.19      +24 -34    src/sys/vfs/hammer/hammer_flusher.c
  1.13      +6 -0      src/sys/vfs/hammer/hammer_freemap.c
  1.65      +55 -18    src/sys/vfs/hammer/hammer_inode.c
  1.34      +120 -2    src/sys/vfs/hammer/hammer_io.c
  1.61      +427 -88   src/sys/vfs/hammer/hammer_object.c
  1.50      +23 -0     src/sys/vfs/hammer/hammer_ondisk.c
  1.20      +1 -2      src/sys/vfs/hammer/hammer_recover.c
  1.23      +26 -0     src/sys/vfs/hammer/hammer_subs.c
  1.39      +8 -0      src/sys/vfs/hammer/hammer_vfsops.c
  1.59      +149 -146  src/sys/vfs/hammer/hammer_vnops.c


http://www.dragonflybsd.org/cvsweb/src/sys/vfs/hammer/hammer.h.diff?r1=1.73&r2=1.74&f=u
http://www.dragonflybsd.org/cvsweb/src/sys/vfs/hammer/hammer_blockmap.c.diff?r1=1.14&r2=1.15&f=u
http://www.dragonflybsd.org/cvsweb/src/sys/vfs/hammer/hammer_btree.c.diff?r1=1.49&r2=1.50&f=u
http://www.dragonflybsd.org/cvsweb/src/sys/vfs/hammer/hammer_cursor.h.diff?r1=1.19&r2=1.20&f=u
http://www.dragonflybsd.org/cvsweb/src/sys/vfs/hammer/hammer_disk.h.diff?r1=1.34&r2=1.35&f=u
http://www.dragonflybsd.org/cvsweb/src/sys/vfs/hammer/hammer_flusher.c.diff?r1=1.18&r2=1.19&f=u
http://www.dragonflybsd.org/cvsweb/src/sys/vfs/hammer/hammer_freemap.c.diff?r1=1.12&r2=1.13&f=u
http://www.dragonflybsd.org/cvsweb/src/sys/vfs/hammer/hammer_inode.c.diff?r1=1.64&r2=1.65&f=u
http://www.dragonflybsd.org/cvsweb/src/sys/vfs/hammer/hammer_io.c.diff?r1=1.33&r2=1.34&f=u
http://www.dragonflybsd.org/cvsweb/src/sys/vfs/hammer/hammer_object.c.diff?r1=1.60&r2=1.61&f=u
http://www.dragonflybsd.org/cvsweb/src/sys/vfs/hammer/hammer_ondisk.c.diff?r1=1.49&r2=1.50&f=u
http://www.dragonflybsd.org/cvsweb/src/sys/vfs/hammer/hammer_recover.c.diff?r1=1.19&r2=1.20&f=u
http://www.dragonflybsd.org/cvsweb/src/sys/vfs/hammer/hammer_subs.c.diff?r1=1.22&r2=1.23&f=u
http://www.dragonflybsd.org/cvsweb/src/sys/vfs/hammer/hammer_vfsops.c.diff?r1=1.38&r2=1.39&f=u
http://www.dragonflybsd.org/cvsweb/src/sys/vfs/hammer/hammer_vnops.c.diff?r1=1.58&r2=1.59&f=u


