DragonFly BSD
DragonFly kernel List (threaded) for 2011-06

[gsoc] Virtio Block device benchmarks


From: Stéphanie Ouillon <stephanie@xxxxxxxxx>
Date: Sat, 04 Jun 2011 00:45:32 +0200

Hello,

I set out to benchmark the virtio block device driver (the one on Gitorious: https://gitorious.org/virtio-drivers/virtio-drivers/blobs/master/blk/virtio-blk.c).

Virtualization environment:

host: QEMU emulator version 0.14.0 (qemu-kvm-devel), on Proxmox VE 1.8
guest: DragonFly BSD i386 v2.10.1, 1024 MB RAM

Filesystem: HAMMER
emulated IDE hard disk: 20 GB
virtio hard disk: 10 GB
(image format: qcow2)


To mount the virtio disk, I did:


    # newfs_hammer -f -L disk-virtio /dev/vdb0s0
    # mount_hammer /dev/vdb0s0 my-mount-point


I ran some tests


- with dd commands like:
# dd if=/dev/zero of=/root/my-mount-point/file1 bs=4k count=10
# dd if=/dev/zero of=/root/file2 bs=4k count=10
varying the values of the bs and count fields;

- with the bonnie++ tool.

The first point is that the results fluctuate a lot from one run to the next with the same arguments, so an average over many runs would have to be computed for the numbers to mean anything. Sometimes the virtio disk is faster, sometimes it is not.
What would be a convincing way to benchmark it? Tim Bisson ran parallel dd threads; is that the only way?
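One option would be a small harness that repeats a dd-style sequential write and reports the mean and standard deviation, so the fluctuation itself becomes part of the result. A minimal sketch (this is hypothetical, not an existing tool; the function names, block size, and run count are mine):

```python
import os
import statistics
import time

def write_test(path, bs=64 * 1024, count=128):
    """Sequential write of count blocks of bs bytes, like dd; returns bytes/sec."""
    buf = b"\0" * bs
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(count):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # force the data to disk before stopping the clock
    elapsed = time.monotonic() - start
    os.unlink(path)           # start each run from a fresh file
    return bs * count / elapsed

def benchmark(path, runs=10):
    """Repeat the write test and summarize the samples."""
    samples = [write_test(path) for _ in range(runs)]
    return statistics.mean(samples), statistics.stdev(samples)
```

Calling benchmark() once with a file on the virtio disk and once with a file on the IDE disk would then give numbers that are at least comparable, with the stdev showing how noisy each set of runs was.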



But the main point is that the VM often crashes during the tests and I have to reboot (I can no longer connect via SSH).
It sometimes crashes during a dd test.
It always crashes during the bonnie++ tests. The first test that tool performs writes to a file repeatedly using the standard stdio macro putc().


I run the command:
bonnie++ -u root -d /root/my-mount-point/ -r 1024
where -r specifies the amount of RAM, in MB.

The test is launched at 22:21:38. This is what appears in /var/log/messages:

Jun 3 22:17:58 kernel: HAMMER(disk-virtio) recovery range 3000000000018368-3000000000024830
Jun 3 22:17:58 kernel: HAMMER(disk-virtio) recovery nexto 300000000001ffe8 endseqno=000fc247
Jun 3 22:17:58 kernel: vringdescnext
Jun 3 22:17:58 last message repeated 8 times
Jun 3 22:17:58 kernel: HAMMER(disk-virtio) recovery undo 3000000000018368-3000000000024830 (50376 bytes)(RW)
Jun 3 22:17:58 kernel: vringdescnext
Jun 3 22:17:58 last message repeated 86 times
Jun 3 22:17:58 kernel: HAMMER(disk-virtio) recovery complete
Jun 3 22:17:58 kernel: vringdescnext
Jun 3 22:18:17 last message repeated 14 times
Jun 3 22:21:38 kernel: vringdescnext
Jun 3 22:21:38 last message repeated 9 times
Jun 3 22:21:38 kernel: here
Jun 3 22:21:38 kernel: vringdescnext
Jun 3 22:21:38 last message repeated 104 times
Jun 3 22:21:38 kernel: Bad enqueue_reserve
Jun 3 22:21:38 kernel: vringdescnext
Jun 3 22:21:38 last message repeated 279 times
Jun 3 22:21:38 kernel: here
Jun 3 22:21:38 kernel: Bad enqueue_reserve
Jun 3 22:21:38 kernel: vringdescnext
Jun 3 22:21:39 last message repeated 116 times
Jun 3 22:21:39 kernel: here
Jun 3 22:21:39 kernel: Bad enqueue_reserve
Jun 3 22:21:39 kernel: vringdescnext
Jun 3 22:21:39 last message repeated 100 times
Jun 3 22:21:39 kernel: here
Jun 3 22:21:39 kernel: Bad enqueue_reserve
Jun 3 22:21:39 kernel: vringdescnext
Jun 3 22:21:39 last message repeated 89 times
Jun 3 22:21:39 kernel: here
Jun 3 22:21:39 kernel: Bad enqueue_reserve
Jun 3 22:21:39 kernel: vringdescnext



I had a quick look through the code to see what the problem is:


This error message comes from virtio-blk.c, in the virtio_blk_execute function, at line 235.
virtio_enqueue_reserve() returns a nonzero value at line 535 of virtio.c when qe == NULL, that is, when there are no free slots left in the virtqueue.
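For intuition only, the failure mode can be modeled in miniature (the class and method names below are mine, not the driver's): reservations succeed only while free descriptors remain, so if completed requests never hand their descriptors back, every further enqueue fails, which would fit the flood of "Bad enqueue_reserve" messages under bonnie++'s sustained small writes:

```python
class VirtqueueModel:
    """Toy model of descriptor reservation in a virtqueue (hypothetical names)."""

    def __init__(self, size):
        self.free = size  # number of free descriptors in the ring

    def enqueue_reserve(self, nsegs):
        """Reserve nsegs descriptors; 0 on success, nonzero when the ring is full."""
        if nsegs > self.free:
            return 1      # analogous to the qe == NULL case in virtio.c
        self.free -= nsegs
        return 0

    def dequeue_done(self, nsegs):
        """A completed request returns its descriptors to the ring."""
        self.free += nsegs

# Submit 3-segment requests into a 128-entry ring without completing any:
vq = VirtqueueModel(size=128)
failures = sum(vq.enqueue_reserve(3) != 0 for _ in range(64))
# 42 requests fit (126 of 128 descriptors); the remaining 22 fail.
```

So one possibility worth checking against the real code is whether the path that reclaims used descriptors is not being run, or races with submission, so that the ring fills up and stays full; that is only an assumption on my part.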


I'll look into it more closely over the weekend, but if you have any ideas about it...


Stéphanie








