DragonFly BSD
DragonFly kernel List (threaded) for 2010-02

Re: kernel work week of 3-Feb-2010 HEADS UP (interleaved swap test)


From: Matthew Dillon <dillon@xxxxxxxxxxxxxxxxxxxx>
Date: Mon, 8 Feb 2010 09:32:30 -0800 (PST)

:Are you using a RAID-0 (striping) here? Or is there a way to tell dumpon
:to use more than one device for swap caching?

    The swap work I did many years ago, before DFly split off, hasn't
    changed in all that time.  It automatically uses 64K stripes and
    supports up to four swap devices.  Total swap space is limited on
    32-bit builds (defaults to 32G, can be raised higher but you risk
    blowing out KVM).  On 64-bit builds total swap space is limited to
    512G by default and can be raised up to a terabyte if desired.
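
    Just for illustration, here is a minimal sketch of how a fixed 64K
    interleave can map a swap block onto one of four devices.  The names,
    the 512-byte device block size, and the layout are assumptions made up
    for the example; this is not the actual swap_pager code.

/*
 * Hypothetical sketch of a 64K swap interleave across four devices.
 * Constants and layout are illustrative only, not DragonFly's code.
 */
#include <stdio.h>

#define SWAP_STRIPE_BYTES       (64 * 1024)     /* 64K stripe, per the text  */
#define DEV_BLOCK_SIZE          512             /* assumed device block size */
#define STRIPE_BLOCKS           (SWAP_STRIPE_BYTES / DEV_BLOCK_SIZE)
#define NSWDEV                  4               /* up to four swap devices   */

/*
 * Map a linear swap block number to (device index, block on device).
 */
static void
swap_locate(long blkno, int *devidx, long *devblk)
{
        long stripe = blkno / STRIPE_BLOCKS;

        *devidx = (int)(stripe % NSWDEV);
        *devblk = (stripe / NSWDEV) * STRIPE_BLOCKS + blkno % STRIPE_BLOCKS;
}

int
main(void)
{
        int dev;
        long off;

        /* A 256K linear I/O winds up touching all four devices in turn. */
        for (long blk = 0; blk < 4 * STRIPE_BLOCKS; blk += STRIPE_BLOCKS) {
                swap_locate(blk, &dev, &off);
                printf("swap blk %6ld -> device %d, block %6ld\n",
                    blk, dev, off);
        }
        return (0);
}

    The point being that a large I/O naturally gets split across the
    devices, which is why striping helps linear throughput as well as
    random.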

:I'd expect random reads from files to profit much more than linear reads
:when compared against hard disks. Do you share this assumption? Do you
:have any numbers on this kind of benchmark? Can you think of any other
:real-world benchmarks that would make sense to execute on such a machine?
:I could do some testing on a big machine, equipped with 2x120 GB SSD,
:once I get the network controller to work (82576 Intel PRO/1000 not
:working on DragonFly).
:
:Regards,
:
:  Michael

    Theoretically SSDs should scale performance for both linear and
    random IO.  For an SSD, since 'seeks' aren't a big factor, it doesn't
    really matter whether a small or large stripe size is used.  i.e.
    you don't get penalized for splitting a large I/O across two devices,
    whether random or linear.

    In reality even an SSD appears to perform slightly less well when
    the IO is less linear.  I was really hoping to get 400MB/s in my test
    but I only got 300MB/s.  Relative to a HD the SSD's performance loss
    for random IO is minor.  i.e. for a HD random IO can cut the IOPS by
    a factor of 10.  For an SSD random IO only seems to cut the IOPS by
    25% (a factor of about 1.3x).

    The SSD vendors seem to want to make a big deal about the IOPS rate
    for small random IOs but I've never been able to even approach the
    maximum IOPS rate specced.  The reason is that the vendors are assuming
    stupidly low block sizes (like 512 bytes) for their IOPS numbers while
    no modern machine uses a block size less than 4K.  The swapcache code
    uses a blocksize in the 16-64K range, so the IOPS limitations are
    irrelevant.
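
    To put rough numbers on that, here is a back-of-the-envelope sketch
    using the 300MB/s figure from the test above; everything else is just
    arithmetic for illustration.

/*
 * IOPS needed to sustain a given bandwidth at various block sizes.
 * The 300MB/s figure is from the test described above.
 */
#include <stdio.h>

int
main(void)
{
        const double bandwidth = 300.0 * 1024.0 * 1024.0;      /* bytes/sec */
        const int blocksizes[] = { 512, 4096, 16384, 65536 };
        const int nsizes = sizeof(blocksizes) / sizeof(blocksizes[0]);

        for (int i = 0; i < nsizes; i++) {
                printf("%6d-byte blocks: %8.0f IOPS to sustain 300MB/s\n",
                    blocksizes[i], bandwidth / blocksizes[i]);
        }
        return (0);
}

    At the 16-64K block sizes swapcache actually uses, roughly 5,000-19,000
    IOPS is already enough to sustain 300MB/s, so the vendors' small-block
    IOPS specs never become the limiting factor.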

    AHCI's NCQ seems to give the Intels an edge over the OCZ.  It adds at
    least another 50MB/sec to the SSD's performance.  I still can't believe
    the OCZ doesn't do NCQ, what a let-down.

					-Matt
					Matthew Dillon 
					<dillon@backplane.com>


