DragonFly kernel List (threaded) for 2005-02
Re: phk malloc, was (Re: ptmalloc2)
Matthew Dillon wrote:
> You just can't equate the two situations. Trying to compare a system
> with overcommit turned on (either physical or swap-backed) with a system
> with overcommit turned off is like comparing apples with oranges.
> Ignoring the fact that it is past ridiculous to worry about running a
> modern system out of swap when you have a 160GB hard drive sitting
> there, even the physical memory argument falls on its face because,
> quite simply, you cannot have it both ways.
> You can't run a system at 100% capacity with overcommit turned off,
> it just won't work. You might be able to run it at 25% capacity
> 100% of the time with overcommit turned off. You could run it at
> 50% capacity and have occasional memory denials and then pray that
> the mostly untested code paths to deal with those denials in the
> software work properly. Similarly, you can't run a system at 100%
> capacity 100% of the time with overcommit turned on, there will be
> load spikes that will drop the efficiency so you will get something
> like 100% capacity 99.9% of the time and 80% capacity 0.1% of the time.
> But, honestly, if someone came up to me and told me that they were
> going to run the system at 25% capacity in order to avoid the
> occasional load spike dropping efficiency down, I would fire them on
> the spot. And that is the crux of the problem here... it is past ridiculous
> not to make full use of a system's capacity if you have need of that
> capacity. Past ridiculous, which is why the whole argument is totally
> and completely bogus and has been bogus for many years now. One wonders
> where people get these ideas, it's so nutty. I mean, give me a break,
> given the choice between the OS randomly returning a memory allocation
> failure and a program self-regulating to a particular footprint
> size, it's obvious that the only reliable solution is for the program
> to self-regulate, because that is a far more controllable
> environment than the OS returning random memory allocation failures.
> It's so obvious that it amazes me people still try to make these
> ridiculous pie-in-the-sky-software-written-perfectly arguments to
> justify turning off overcommit to fix a one in a million year chance
> of a properly configured system running out of swap. Past ridiculous.
Ok. The problem is that the proposed solutions for systems which allow
overcommit will lead to even lower machine utilization, because they are:
a) manual, which can take a long time to do properly (and is probably
error prone), and
b) static, in that they cannot adjust to changing workloads.
If I am not mistaken, proposed solutions for systems which allow
overcommit include buying more hardware than you could possibly need
(disks and memory): would you fire people providing that kind of advice?
Would you fire yourself for configuring a system with way more
swap+memory than it needs? The argument you are making sounds flawed and
leads to resources that are never used... just as in the dreaded
no-overcommit case. At least in systems which don't allow overcommit, it
is possible for the system to make better and more dynamic use of its
resources.
And the point we keep coming back to is that it is impossible for an
application to accurately self-regulate its resource usage (unless you
mean allowing command-line flags to specify how much memory to use [why
not just set rlimits instead?]), since it does not receive accurate
feedback from the kernel when overcommit is allowed.