
Re: kernel leaking memory somewhere


From: Matthew Dillon <dillon@xxxxxxxxxxxxxxxxxxxx>
Date: Wed, 16 Dec 2009 21:12:27 -0800 (PST)

:How hard would it be to change those settings at runtime, especially
:how much memory is available for disk caching and the total available
:memory space?  In virtualized environments (e.g. kvm or vkernel), it
:could make great sense to "return" memory pages to the host.  Think of
:vkernels that don't need a fixed-size memory image but would otherwise
:use most of the space for disk caching, which probably makes no sense
:in a vkernel anyway (as it would double-cache those blocks, once in
:the host and again in the vkernel).
:
:Just my thoughts, as I recently stumbled across the virtio-balloon
:driver :)
:
:Regards,
:
:  Michael
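
    (Since a vkernel is an ordinary user process on the host,
    "returning" pages essentially means dropping the backing store of
    its anonymous memory mapping.  A minimal sketch of that mechanism
    via madvise(2); the mapping size and the advice flag are
    illustrative assumptions, not actual vkernel code:)

    /*
     * Illustrative sketch only: release the backing pages of the
     * guest's "RAM" back to the host.
     */
    #include <sys/mman.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define RAMSIZE (64UL * 1024 * 1024)    /* pretend guest "RAM" */

    int
    main(void)
    {
        void *ram = mmap(NULL, RAMSIZE, PROT_READ | PROT_WRITE,
                         MAP_ANON | MAP_PRIVATE, -1, 0);

        if (ram == MAP_FAILED) {
            perror("mmap");
            exit(1);
        }

        /* ... the guest runs and dirties pages ... */

        /*
         * Hand the pages back.  Semantics vary: on Linux,
         * MADV_DONTNEED discards anonymous pages immediately;
         * the BSDs offer MADV_FREE as the lazy variant.
         */
        if (madvise(ram, RAMSIZE, MADV_DONTNEED) == -1)
            perror("madvise");
        return (0);
    }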

    A kvm or a vkernel is generally going to be cycling through all
    of its memory regardless.  Even more importantly, the vkernel
    cannot operate efficiently unless it maintains its own cache.
    It also has to deal with copy-on-write and other mapping modes
    which are not necessarily conducive to sharing the host OS's
    file cache.

    There will be duplication, but at the same time it would be
    very difficult for the vkernel to manage its resources if
    it tried to avoid it.  The vkernel (or a kvm) has to
    treat its 'memory' as being immediately accessible, and it
    needs to know precisely what data is instantly accessible
    (i.e. in its cache) versus data which might require I/O
    (whether in the host cache or not).
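
    A minimal sketch of that distinction, with hypothetical
    structures; the only hard requirement is that the lookup itself
    can never stall:

    #include <stddef.h>
    #include <sys/types.h>

    #define VC_HASH 256

    struct vbuf {
        off_t        blkno;
        struct vbuf *next;
        char         data[4096];
    };

    static struct vbuf *vcache[VC_HASH];

    /* Non-blocking lookup: returns the buffer or NULL, never stalls */
    static struct vbuf *
    vcache_lookup(off_t blkno)
    {
        struct vbuf *bp;

        for (bp = vcache[blkno % VC_HASH]; bp != NULL; bp = bp->next)
            if (bp->blkno == blkno)
                return (bp);    /* instantly accessible */
        return (NULL);          /* might require host I/O */
    }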

    This is because the vkernel cannot really afford for any of the
    threads dedicated to emulating 'cpus' to randomly stall due to,
    e.g., having to page something in from the host.  The vkernel
    shoves operations which it knows might stall into other
    special threads precisely so its cpu threads never stall.
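
    A minimal sketch of that pattern, again with hypothetical
    structures rather than the vkernel's actual code.  When a lookup
    like vcache_lookup() above misses, the request would be queued
    here instead of being read synchronously:

    #include <sys/types.h>
    #include <pthread.h>
    #include <unistd.h>

    struct ioreq {
        int             fd;
        void            *buf;
        size_t          len;
        off_t           off;
        volatile int    done;   /* completion flag polled by cpu thread */
        struct ioreq    *next;
    };

    static struct ioreq    *ioq;
    static pthread_mutex_t  ioq_mtx = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t   ioq_cv = PTHREAD_COND_INITIALIZER;

    /* cpu thread: enqueue and return immediately, never block on disk */
    static void
    ioreq_submit(struct ioreq *req)
    {
        req->done = 0;
        pthread_mutex_lock(&ioq_mtx);
        req->next = ioq;
        ioq = req;
        pthread_cond_signal(&ioq_cv);
        pthread_mutex_unlock(&ioq_mtx);
    }

    /* I/O thread: this is where the stalls are allowed to happen */
    static void *
    io_thread(void *arg)
    {
        struct ioreq *req;

        (void)arg;
        for (;;) {
            pthread_mutex_lock(&ioq_mtx);
            while (ioq == NULL)
                pthread_cond_wait(&ioq_cv, &ioq_mtx);
            req = ioq;
            ioq = req->next;
            pthread_mutex_unlock(&ioq_mtx);

            (void)pread(req->fd, req->buf, req->len, req->off);
            req->done = 1;  /* real code wants a barrier or wakeup here */
        }
        return (NULL);
    }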

    Actually, this is a difference I noticed when running qemu vs.
    a vkernel.  When qemu's disk emulator needs to read from backing
    store it seems to have serious trouble avoiding stalls in its
    main emulation thread.  Supposedly qemu uses aio, but if so
    it isn't working as it should.  The vkernel is able to deal
    with I/O much more smoothly.
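
    For reference, the POSIX aio interface (presumably what qemu's
    'aio' refers to) is designed to let a single thread issue a read
    and poll for completion without ever blocking.  A minimal sketch,
    assuming a POSIX host; the file name and buffer size are
    arbitrary:

    #include <aio.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        static char buf[4096];
        struct aiocb cb;
        int fd = open("/etc/motd", O_RDONLY);   /* any readable file */

        if (fd < 0)
            return (1);
        memset(&cb, 0, sizeof(cb));
        cb.aio_fildes = fd;
        cb.aio_buf = buf;
        cb.aio_nbytes = sizeof(buf);
        cb.aio_offset = 0;

        if (aio_read(&cb) == -1)    /* queues the I/O, returns at once */
            return (1);

        while (aio_error(&cb) == EINPROGRESS) {
            /* the main emulation loop would keep running here */
        }
        printf("read %zd bytes\n", aio_return(&cb));
        return (0);
    }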

 						-Matt



