DragonFly BSD
DragonFly users List (threaded) for 2005-02

Re: em driver - issue #2


From: Matthew Dillon <dillon@xxxxxxxxxxxxxxxxxxxx>
Date: Sun, 6 Feb 2005 13:30:10 -0800 (PST)

:
:It would be good if a user could make the decision that mbuf allocations
:should be a priority. For a normal system, the current settings are likely
:adequate; while a network appliance would require a different set of 
:priorities. What we do is test the absolute capacity of the box under
:best conditions to forward packets, so it would be rather easy to set
:a threshold. 

    A router has a more definable set of conditions than a general-purpose
    box.  I agree that we need some sort of threshold, but it would have
    to be fairly generous to deal with the varying usage patterns out
    there.  e.g. we would want to allow at least a 1000-packet queue
    backlog between the IF interrupt and the protocol stacks.  Packet
    processing actually becomes more efficient as the queues get longer,
    so the only real consideration here is how much latency one wishes to
    tolerate.  Latency is going to be a function of available cpu in the
    backlog case.
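
    As a back-of-the-envelope illustration of that last point (the numbers
    below are made up, not measurements):

/*
 * Toy calculation only -- not DragonFly code.  The worst-case added
 * latency for a packet at the tail of the backlog is roughly the backlog
 * depth divided by the rate at which the protocol stack can drain it,
 * and that drain rate is bounded by available cpu.
 */
#include <stdio.h>

int
main(void)
{
	double backlog_pkts = 1000.0;		/* queued between IF int and stack */
	double drain_rate_pps = 200000.0;	/* packets/sec the stack can process */
	double latency_ms;

	latency_ms = backlog_pkts / drain_rate_pps * 1000.0;
	printf("~%.2f ms added latency with %.0f packets queued at %.0f pps\n",
	    latency_ms, backlog_pkts, drain_rate_pps);
	return(0);
}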

:Also, in FreeBSD I tune the kernel to use about 120M of RAM no matter
:how much is in the system, as that's about the most that is needed.
:I assume that the kern.vm.kmem.size works the same in dfly? Is the 
:default formula still 1/3 of available memory allocated to the kernel? I
:admit that I haven't done this on my dfly test system, so I'll have to try
:it and see what difference it makes. My test system only has 256M
:in it and now that I think of it, without tuning that may only allocate
:80M to the kernel which could contribute to the problem. Although,
:it would seem to me that if a user overrides the default mbuf clusters
:in the kernel config that those clusters should be preallocated, as
:the entire point of changing the setting is that you really NEED that many, 
:and you want to avoid running out at all costs. Isn't the point of making
:the mbuf clusters requirement a kernel option that they need to be 
:preallocated and the kernel needs to know how many in advance?

    There is no kmem_map any more in DragonFly.  All kernel memory
    is taken out of a single pool based on kernel_map.  This gives us
    a lot more flexibility because we can only run out of space in one
    larger map rather than running out of space separately in two smaller
    maps.

    Run vmstat -m on your system with the system in an mbuf-saturated state
    (I think this is the third time I've said that!).

    There is no aggregate limit on kernel-allocated memory other than the
    KVM available, which is 1GB by default.  There are individual malloc
    pool limits (as shown by vmstat -m) and these are typically scaled
    based on available physical memory.  No single malloc pool is allowed
    to eat more than 1/10 of the system's physical memory by default.  Under
    the heaviest general-purpose loads only two or three malloc pools should
    ever get near their limits, so we are talking about approximately 40% of
    available system RAM.
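
    To put rough numbers on that for a small machine like the 256M test box
    mentioned above (toy arithmetic, not kernel code):

/*
 * Toy illustration of the scaling described above.  Each malloc pool is
 * capped at roughly 1/10 of physical memory, so even a handful of
 * saturated pools stays comfortably below half of RAM.  The 256MB figure
 * matches the test box mentioned earlier in the thread.
 */
#include <stdio.h>

int
main(void)
{
	unsigned long physmem = 256UL * 1024 * 1024;	/* 256MB system */
	unsigned long pool_limit = physmem / 10;	/* default per-pool cap */
	int saturated = 3;				/* heavy-load worst case */

	printf("per-pool limit: ~%lu MB\n", pool_limit / (1024 * 1024));
	printf("%d saturated pools: ~%lu MB of %lu MB total\n", saturated,
	    (unsigned long)saturated * pool_limit / (1024 * 1024),
	    physmem / (1024 * 1024));
	return(0);
}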

    malloc is backed by the slab allocator and the slab allocator is
    backed by kernel_map.

    This effectively replaces the KVM reservation for mbufs that we
    had before.  FreeBSD has also moved their mbuf model to a slab-backed
    allocation, but they haven't removed kmem_map yet and they are using
    a somewhat less flexible unaggregated design (but it almost doesn't
    matter for something as resource-intensive as the mbuf subsystem).

    In any case, the malloc pool limits are simple programmed values and
    can theoretically be modified on the fly, but we don't have a sysctl
    API to do that yet.
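
    Something along these lines would probably do it, though I'll stress
    this is just a sketch -- the oid name, the pool being tuned, and the
    handler are made up for illustration:

/*
 * Hypothetical sketch only: what a sysctl for changing a malloc pool's
 * limit at run time might look like.  The "kern.example_limit" oid and
 * the M_EXAMPLE pool are invented; ks_limit is the per-pool limit the
 * text above refers to.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/malloc.h>
#include <sys/sysctl.h>

extern struct malloc_type M_EXAMPLE[1];	/* whichever pool we want to tune */

static int
sysctl_example_limit(SYSCTL_HANDLER_ARGS)
{
	long limit = M_EXAMPLE->ks_limit;
	int error;

	error = sysctl_handle_long(oidp, &limit, 0, req);
	if (error || req->newptr == NULL)
		return(error);
	if (limit <= 0)
		return(EINVAL);
	M_EXAMPLE->ks_limit = limit;	/* the limit is just a programmed value */
	return(0);
}

SYSCTL_PROC(_kern, OID_AUTO, example_limit, CTLTYPE_LONG | CTLFLAG_RW,
    NULL, 0, sysctl_example_limit, "L", "example malloc pool limit");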

					-Matt
					Matthew Dillon 
					<dillon@xxxxxxxxxxxxx>


