DragonFly BSD
DragonFly kernel List (threaded) for 2005-02

phk malloc, was (Re: ptmalloc2)


From: Jonathan Dama <eng@xxxxxxxxxxxxxxxxxxxxxx>
Date: Thu, 17 Feb 2005 18:31:05 -0800

ptmalloc is, I believe, the basis for glibc malloc.

glibc malloc has at least one excellent feature that
phkmalloc lacks: it services some memory requests with
mmap instead of relying solely on brk.
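For what it's worth, glibc even exposes the brk/mmap cutoff
as a tunable; as I understand it, it looks something like this
(the threshold value here is picked arbitrarily):

#include <malloc.h>
#include <stdlib.h>

int
main(void)
{
    void *small, *large;

    /* Requests larger than the threshold are served by mmap() instead
     * of the brk() heap; glibc's default is 128kB if I remember right. */
    mallopt(M_MMAP_THRESHOLD, 64 * 1024);

    small = malloc(4 * 1024);      /* likely carved out of the brk heap */
    large = malloc(512 * 1024);    /* served by an anonymous mmap */

    free(large);                   /* handed straight back via munmap() */
    free(small);
    return (0);
}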

This is important because shared libraries and other uses
of mmap constrain how far the data segment can grow.  MAXDSIZ
gives some control over the mmap/data-segment trade-off, but
it does so at the system level--and brk is a very stupid
system call.
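(For reference, the per-process view of that limit is just
RLIMIT_DATA, whose hard limit defaults from MAXDSIZ; a quick
way to check what you're running under:)

#include <sys/types.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <stdio.h>

int
main(void)
{
    struct rlimit rl;

    /* RLIMIT_DATA is the per-process data segment ceiling. */
    if (getrlimit(RLIMIT_DATA, &rl) == 0)
        printf("data segment: cur=%lld max=%lld\n",
            (long long)rl.rlim_cur, (long long)rl.rlim_max);
    return (0);
}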

Ideally, you ought to be able to fill malloc's backing keg
with allocations from mmap, permitting MAXDSIZ to be left
alone.  (phk reports that mmap is 5x slower than brk--see:
http://lists.swelltech.com/pipermail/squid-users-archive/1996-August/000666.html
http://lists.swelltech.com/pipermail/squid-users-archive/1996-August/000663.html
--but the difference could be made up simply by going to the
kernel less frequently; more on that below.)
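What I have in mind is roughly this: grab the keg's backing
store as anonymous pages and leave brk (and hence MAXDSIZ) out
of the picture entirely.  Just a sketch, the refill size is
made up:

#include <stddef.h>
#include <sys/types.h>
#include <sys/mman.h>

#define KEG_CHUNK   (256 * 4096)   /* hypothetical refill size: 256 pages */

/* Refill the allocator's keg straight from the VM system, so the data
 * segment never grows and MAXDSIZ never enters into it. */
static void *
keg_refill(void)
{
    void *p;

    p = mmap(NULL, KEG_CHUNK, PROT_READ | PROT_WRITE,
        MAP_ANON | MAP_PRIVATE, -1, 0);
    return (p == MAP_FAILED ? NULL : p);
}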

phkmalloc is very good at using madvise to mark pages
for recycling, but it completely fails to take advantage of
the overcommit nature of the VM system.  phk's original design
is based on the assumption that conserving RAM/swap meant
requesting the fewest possible pages from the kernel via
brk--rather than the looser constraint of writing to the
fewest possible pages.
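(For concreteness, the recycling hint amounts to something
like this on a run of completely free pages--a sketch, not
phkmalloc's exact code:)

#include <sys/types.h>
#include <sys/mman.h>

/* Hand a run of completely free pages back to the VM system without
 * shrinking the heap: the mapping stays, the physical pages become
 * reclaimable, and the contents are undefined if touched again. */
static void
pages_recycle(void *base, size_t len)
{
    (void)madvise(base, len, MADV_FREE);
}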

As a consequence, it makes far too many system calls to
allocate more memory.  It even keeps its own page directory
very small and extends it by mmap()ing a whole new region for
it and then doing a memcpy.  I have a vague suspicion that it
does this every time it wants to extend the data segment...
even though the page directory only needs to grow once every
256 pages or so...

Moreover, it may even call brk for every additional 4K
allocated via malloc.
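The obvious fix is to grow in geometrically larger steps so
that both the syscall and the page-directory copy get
amortized.  A sketch of the idea, not phkmalloc's code, with a
made-up step policy:

#include <stdint.h>
#include <unistd.h>

/* Extend the heap in geometrically growing steps so the number of
 * brk()/sbrk() calls stays O(log n) in the final heap size instead of
 * one call per page. */
static size_t heap_step = 16 * 4096;    /* start with 16 pages */

static void *
heap_extend(size_t need)
{
    void *old;

    while (heap_step < need)
        heap_step *= 2;
    old = sbrk((intptr_t)heap_step);
    return (old == (void *)-1 ? NULL : old);
}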

(! I haven't carefully examined the code to be sure though !)

Correct me if I'm wrong, but because of overcommit there
is no direct reason a program shouldn't allocate the entire
address space from the kernel (unless it intends to mmap).
Obviously, to play nicely with other uses of mmap this isn't
desirable, but it does suggest there is relatively little
benefit in being overly conservative.
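That is, in principle nothing stops you from reserving a big
arena up front and letting it fault in lazily, since untouched
pages cost next to nothing under overcommit:

#include <stddef.h>
#include <sys/types.h>
#include <sys/mman.h>

#define ARENA_SIZE  (1UL << 30)    /* reserve 1GB of address space up front */

/* Under overcommit this costs address space only; physical pages (and
 * swap, if it ever comes to that) are assigned page by page as the
 * arena is actually written to. */
static void *
arena_reserve(void)
{
    void *p;

    p = mmap(NULL, ARENA_SIZE, PROT_READ | PROT_WRITE,
        MAP_ANON | MAP_PRIVATE, -1, 0);
    return (p == MAP_FAILED ? NULL : p);
}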

I suggested to Poul recently that I'd investigate this and
do some benchmarks but I haven't had time to do so yet.

Incidentally, it would be nice if there were a form of mlock
which ensured that a given set of pages in the process address
space had either physical or swap backing.  Is there such a
thing already?
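(The closest thing I can think of is plain mlock(), which pins
the range in RAM outright--stronger than what I'm after, since
I only want the backing guaranteed, not residency:)

#include <sys/types.h>
#include <sys/mman.h>

/* mlock() guarantees the range physical residency (and hence backing),
 * which is stronger than a pure "reserve swap for this range" call. */
static int
range_pin(void *base, size_t len)
{
    return (mlock(base, len));
}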

-Jon


