DragonFly BSD
DragonFly kernel List (threaded) for 2011-09

New Blogbench benchmarks with 2.11


From: Francois Tigeot <ftigeot@xxxxxxxxxxxx>
Date: Sun, 18 Sep 2011 23:25:50 +0200

I've run a new series of blogbench benchmarks with the same parameters I used
in July; you'll find the results in the attached files.

The machine used was a bit less powerful than the last one, which means the
results cannot be directly compared.

I tested DragonFly 2.10 and the latest 2.11 with two different Areca
RAID controllers.
Since OpenIndiana has just released a new version, I also tested it on one
of them. ZFS was used with pass-through disks; I don't think the difference
in processing power between the two Areca cards would have changed the
results by more than a few percent.

The results are very encouraging: DragonFly 2.11 doesn't suffer from write
starvation with heavy disk loads like 2.10 did.


Complete setup used:
--------------------

  - Supermicro X8STE mainboard
  - Intel Xeon E5606 (2.13 GHz, 8MB cache, triple-channel DDR3)
  - 12 GB RAM
    hw.physmem="4GB" for the DragonFly tests (see the sketch below the list)
    only one 4GB DIMM for the OpenIndiana tests
  
  - Areca ARC-1212 (ARM IOP348 800MHz, 256MB DDR2-533)
  or
  - Areca ARC-1880-i (PPC 440 800 MHz, 512MB DDR2-800)
  
  - 4x 500GB WD5003ABYX hdd (SATA 3Gb/s, 7200 RPM, 64MB cache)
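
The hw.physmem limit above would typically be set as a loader tunable; a
minimal sketch, assuming it was put in /boot/loader.conf (the exact method
used is not stated here):

  # /boot/loader.conf on the DragonFly test machine (illustrative)
  hw.physmem="4GB"   # cap usable RAM at 4GB for the benchmark runs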


Commands used:
--------------

  newfs_hammer -L ARECA /dev/da0
  mount -t hammer /dev/da0 /mnt
  mkdir /mnt/blogbench
  blogbench -d /mnt/blogbench --iterations=150

or

  zpool create tank c4t0d0 c4t0d1 c4t0d2 c4t0d3
    (with more or less disks for the different runs)
  mkdir /tank/blogbench
  blogbench -d /tank/blogbench --iterations=150
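
As an aside, disk activity during a run can be captured for later comparison;
a minimal sketch, assuming a standard sh and the ZFS setup above (the log
path is illustrative, not part of the original runs):

  zpool iostat tank 2 > /var/tmp/zpool-iostat.log &   # sample pool activity
  IOSTAT_PID=$!
  blogbench -d /tank/blogbench --iterations=150
  kill $IOSTAT_PID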


DragonFly 2.10 remarks
----------------------

Many messages like these appeared on the console:

[diagnostic] cache_lock: blocked on 0xffffffe06297abc8 "article-1.xml"
[diagnostic] cache_lock: blocked on 0xffffffe06297abc8 "article-1.xml"
[diagnostic] cache_lock: blocked on 0xffffffe06297abc8 "article-1.xml"
[diagnostic] cache_lock: unblocked article-1.xml after 512 secs
[diagnostic] cache_lock: unblocked article-1.xml after 513 secs
[diagnostic] cache_lock: unblocked article-1.xml after 327 secs
[diagnostic] cache_lock: blocked on 0xffffffe061f688f8 "article-40.xml"
[diagnostic] cache_lock: blocked on 0xffffffe061f688f8 "article-40.xml"
[diagnostic] cache_lock: unblocked article-40.xml after 696 secs
[diagnostic] cache_lock: unblocked article-40.xml after 710 secs

Almost all operations were reads after some time; almost no writes were
visible in systat.


DragonFly 2.11 remarks
----------------------

Some cache_lock messages were visible on the console, but with much shorter
durations:

[diagnostic] cache_lock: blocked on 0xffffffe05b675d08 "article-72.xml"
[diagnostic] cache_lock: blocked on 0xffffffe061e4d338 "article-31.xml"
[diagnostic] cache_lock: unblocked article-31.xml after 3 secs
[diagnostic] cache_lock: unblocked article-72.xml after 6 secs

There were also a few messages like these:

EXDEV case 1 0xffffffe076476848 
EXDEV case 1 0xffffffe065244848 

Neither reads nor writes were starved and both were constantly visible in
systat. The R/W mix was still a bit too unbalanced at times, IMHO:

Disks   da1   da0   cd0 pass1 pass2 pass3   sg1      8185 pdpgs
KB/t        16.57                                         intrn
tpr/s         371                                  373056 buf
MBr/s        7.62                                   54432 dirtybuf
tpw/s        2511                                  277863 desiredvnodes
MBw/s       38.99                                  277863 numvnodes
% busy    0   100     0     0     0     0     0    226295 freevnodes

(less than 13% of operations are reads in this case)
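
For reference, that figure follows from the transaction columns; a quick
check with bc, assuming tpr/s and tpw/s are read and write transactions per
second in the systat excerpt above:

  echo "scale=3; 371 * 100 / (371 + 2511)" | bc
  # => 12.873, i.e. less than 13% of operations are reads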


OpenIndiana remarks
-------------------

Since the preferred way to set up a multi-disk volume with ZFS is to give it
individual disks, the RAID features of the controller were not used and the
disks were exported in pass-through mode.

As with DragonFly 2.10, writes were starved; this was nicely visible in the
blogbench output and with iostat -D.

zpool iostat makes for a better show:

user@openindiana:~# zpool iostat tank 2
               capacity     operations    bandwidth
       pool        alloc   free   read  write   read  write
       ----------  -----  -----  -----  -----  -----  -----
       tank        4.33G  1.81T  1.68K    196  26.0M  11.2M
       tank        4.33G  1.81T  2.23K     27  37.3M  2.24M
       tank        4.33G  1.81T  2.13K     60  34.4M  2.94M
       tank        4.33G  1.81T  2.02K    131  32.2M  6.18M
       tank        4.33G  1.81T  2.01K    102  32.6M  2.28M
       tank        4.33G  1.81T  2.28K      4  35.9M   561K

(that's roughly 0.17% writes in the worst case)
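
That worst-case figure follows from the last line above (4 write operations
vs roughly 2.28K reads in the interval); a quick check with bc:

  echo "scale=4; 4 * 100 / (2280 + 4)" | bc
  # => .1751, i.e. well under 1% of operations are writes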

-- 
Francois Tigeot

Attachment: Blogbench_dfly211.pdf
Description: Adobe PDF document


