DragonFly commits List (threaded) for 2005-01
MP Synchronization mechanisms (was Re: cvs commit: src/sys/net if.c...)
:On Wednesday, 19. January 2005 18:30, Matthew Dillon wrote:
:> The two solutions are to either ref-count the network interface on a
:> per-packet basis or to synchronize against consumers of the packet.
:> ref-counting is very expensive in an MP system so we have chosen to
:> synchronize against consumers by sending a NOP message to all protocol
:> processing threads and waiting for it to be replied. This only occurs
:> when an interface is being brought down and is not expected to introduce
:> any performance issues.
:
:this is really great! i mean, the simplicity is overwhelming... this messaging
:framework really pays off :)
:
:cheers
: simon
Yup. Little advantages here and there that I hadn't originally thought
of keep coming out of the woodwork.
This is 'almost' RCUish. I've been thinking about RCU as well (Jeff
and I have talked about it on and off for a year+). The thing I don't
like about RCU is that it seems overly complex for what it is supposed
to accomplish.
It occurs to me that if all we really need is a general, passive
synchronization mechanism between cpus, then a kernel thread that wakes
up every so often (say 5 or 10 times a second), migrates itself to each
cpu in turn, and runs that cpu's queue would be sufficient. Something
like this:
    void
    passive_synchronizer_thread(void)
    {
        static struct passive_work_node marker[MAXCPUS];
        struct passive_work_node *pn;
        int i;

        for (;;) {
            /*
             * Pass 1: drop a marker at the tail of each cpu's
             * passive work queue.
             */
            for (i = 0; i < ncpus; ++i) {
                lwkt_setcpu_self(globaldata_find(i));
                TAILQ_INSERT_TAIL(&mycpu->gd_passive_work_queue,
                                  &marker[i], pn_entry);
            }
            /*
             * Pass 2: revisit each cpu and run everything that was
             * queued ahead of our marker.
             */
            for (i = 0; i < ncpus; ++i) {
                lwkt_setcpu_self(globaldata_find(i));
                crit_enter();
                for (;;) {
                    pn = TAILQ_FIRST(&mycpu->gd_passive_work_queue);
                    if (pn == &marker[i])
                        break;
                    TAILQ_REMOVE(&mycpu->gd_passive_work_queue,
                                 pn, pn_entry);
                    TAILQ_INSERT_TAIL(&mycpu->gd_passive_work_free,
                                      pn, pn_entry);
                    crit_exit();
                    pn->pn_callback(pn);
                    crit_enter();
                }
                TAILQ_REMOVE(&mycpu->gd_passive_work_queue,
                             &marker[i], pn_entry);
                crit_exit();
            }
            tsleep( ... , hz / 5);
        }
    }
I'm not going to do it until I see a clear need, though.
-Matt
Matthew Dillon
<dillon@xxxxxxxxxxxxx>