:Matthew Dillon wrote:
:> Well, keep in mind, that in 2-3 years time there won't *BE* any
:> single-cpu computers any more...
:
:I'm still confused about the difference (if any) between a motherboard
:with two separate CPU's and a new mobo with a dual-core CPU.
:
:From your point of view as a kernel programmer, is there any difference?
    Basically, a single-processor motherboard of today still looks the
    same as one of yesterday, but the physical cpu chip you plug into the
    board can actually contain two cpus instead of one if you are using a
    multi-core cpu (and a multi-core capable MB, of course).  So you get
    the same computing power out of it as a dual-socket motherboard would
    have given you.
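    From the OS's side the two cases look essentially identical: each
    core simply shows up as another logical cpu, so a dual-core
    single-socket box and a single-core dual-socket box report the same
    thing.  (hw.ncpu is the stock BSD sysctl; the output below is what
    you'd expect to see, not a captured transcript):

	$ sysctl hw.ncpu
	hw.ncpu: 2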
    It gets even better for SMP boards.  The two-socket Opteron MB of
    yesterday could accommodate two single-core cpus, giving you 2xCPU
    worth of computing power.  A two-socket Opteron MB of today can
    accommodate two dual-core cpus, giving you 4 cpus and roughly 3x the
    performance.  A 4-socket box gives you 8 cpus and roughly 6x the
    performance.
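    Rough back-of-envelope on those numbers, assuming around 75% SMP
    scaling efficiency past the first core (my assumption for
    illustration, not a benchmark):

	2 sockets x 2 cores = 4 cpus  ->  ~4 x 0.75 = ~3x
	4 sockets x 2 cores = 8 cpus  ->  ~8 x 0.75 = ~6x

    The shortfall from linear is mostly memory and bus contention.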
    It IS true that the initial dual-core parts run at a slightly slower
    clock rate.  But that isn't because they are dual-core; it is simply
    because both AMD and Intel knew they had gone past the heat
    dissipation limits in trying to max out the clock frequency of their
    single-core CPUs, and going dual-core gave them just the excuse they
    needed to back down to more reasonable dissipation levels.
    The result is that cooling requirements for dual-core cpus, not to
    mention case and power supply requirements, are far lower.  Lower
    requirements == less money to maintain, and in a server room ==
    less money spent on electricity and less money spent on cooling.
    People don't care much about single-core performance these days,
    because most computing jobs aren't single-threaded.  Even a mail
    server isn't single-threaded... it forks off a process for each
    connection.
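    To make that concrete, here is a minimal sketch of the classic
    fork-per-connection accept loop that traditional mail servers are
    built around.  The socket setup is elided, and handle_connection()
    is a hypothetical stand-in for the real protocol work:

	/*
	 * Minimal fork-per-connection server loop (illustrative sketch).
	 * 'lsock' is assumed to be a bound, listening TCP socket.
	 */
	#include <sys/socket.h>
	#include <signal.h>
	#include <unistd.h>

	void handle_connection(int);	/* hypothetical protocol handler */

	void
	serve(int lsock)
	{
		signal(SIGCHLD, SIG_IGN);	/* auto-reap children */
		for (;;) {
			int csock = accept(lsock, NULL, NULL);
			if (csock < 0)
				continue;
			if (fork() == 0) {
				/* child: handle one connection, exit */
				handle_connection(csock);
				_exit(0);
			}
			close(csock);	/* parent keeps listening */
		}
	}

    Each connection gets its own process, so an SMP box services many
    clients in parallel with no threading in the server at all.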
    As these cpus approach price parity, dual-core makes more and more
    sense *EVEN* if you don't actually need the extra computing power,
    simply because the clock frequency reduction results in far, far less
    power use.
What matters now is computing performance per watt of electricity used.
That isn't to say that people are dropping AMD and Intel and going to
ARM... there are still minimum performance requirements simply due to
the cost of all the other hardware that makes up a computer. But it
does mean that people would rather have two 2.0 GHz cpus which together
    use LESS power than a single 2.6 GHz cpu.
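    The arithmetic behind that preference: dynamic power scales roughly
    with frequency times voltage squared, and voltage has to rise with
    frequency, so power climbs roughly with the cube of the clock.
    Using that cube-law rule of thumb (an approximation, not a
    spec-sheet number):

	one 2.6 GHz part:   (2.6/2.6)^3              = 1.00 units of power
	two 2.0 GHz parts:  2 x (2.0/2.6)^3 = 2 x 0.455 ~= 0.91 units

    So the pair delivers ~1.5x the aggregate clock for ~10% less power.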
So from my point of view, we win both ways. Not only are there plenty
of applications that can use the newly available computing power, even
if it is just in 'burst' usage, but you get that new power at lower
cost. Even without price parity on the cpus (and as I have said,
price parity is rapidly approaching anyhow), it is STILL worth it simply
due to savings in electricity costs. I spend over $2500/year JUST to
    power my machine room.  That's down from the $3500/year I was spending
    two years ago, when I had only 1/4 of the computing power in that room
    that I have today.  This means that I really don't give a damn whether
a dual-core cpu costs a bit more money or not. The extra few hundred
I spend on it today is easily made up in the power savings I get over
the course of a single year.
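    To put that $2500/year in perspective, assuming roughly $0.10/kWh
    (my assumption for the rate):

	$2500 / ($0.10/kWh)  =  25,000 kWh/year
	25,000 kWh / 8760 h  ~=  2.9 kW of continuous draw

    At that rate every 100W you shave off is worth about $88/year,
    before you even count the reduced air conditioning load.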
    The reason?  More aggregate computing power eating less electricity.
    If you think about it, that equation... 'more aggregate computing
    power eating less electricity'... is PRECISELY what multi-core gives
    us.  As Martin mentioned, using virtualization to concentrate
    computing power results in huge savings.  Multi-core lets you
    concentrate that computing power even further, eating even less
    electricity in the process.
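    A concrete consolidation example (the wattages are illustrative,
    not measurements):

	4 single-cpu boxes @ 150W each      = 600W
	1 dual-socket dual-core box @ 250W  = 250W for the same 4 cpus

	savings: 350W x 8760 h ~= 3066 kWh/year ~= $300/year at $0.10/kWh

    ...and that's before counting the cooling load you no longer have
    to pay for.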