DragonFly kernel List (threaded) for 2004-04
Re: serializing token
:I am looking at the serializing token work some more
:and I have a few questions. I hope it is not a bother
:for y'all to answer.
:
:As I understand it, multiple threads can hold tokens
:but only one of the threads holding a particular token
:can be running at any time. As I looked at the lwkt
:token implementation, I believe obtaining a token
:reference is a blocking point, is this correct? What I
Right.
:mean is if thread A is running and holding token T,
:and thread B attempts to grab a reference to token T,
:thread B blocks (and the scheduler gets invoked to run
:another thread) since A is already running. Is this
:correct?
Right.
:If a call to get a token reference is a blocking
:point, then what happens when you need to hold more
:than one token? Specifically, I am concerned with this
:case: Token T1 protects list L1 and token T2 protects
:list L2. If the code looks like this:
:lwkt_gettoken(T1);
:foo = L1->head;
:...
:...
:lwkt_gettoken(T2);
:bar = L2->head;
:...
:...
:dosomething(foo);
The LWKT scheduler will not schedule a thread until the scheduler
can acquire all tokens currently held by that thread. So if the
second gettoken blocks, the first token can be lost, but the call
to lwkt_gettoken(T2) will not return until both tokens can be
acquired again by the scheduler.
:
:Is this valid/correct code? Is foo still valid after
:the second lwkt_gettoken call? I ask this because the
:second call to lwkt_gettoken(T2) can be a blocking
:point, so some other thread can get scheduled at that
:point. If another thread that is holding T1 runs, then
No, foo will NOT still be valid if it depends on T1. You would
have to use this sequence to guarantee that foo stays valid, or you
would have to perform some sort of recheck after gaining T2 before
using foo:
lwkt_gettoken(T1);
lwkt_gettoken(T2);
foo = L1->head;
bar = L2->head;
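For illustration, the recheck variant might look something like this
(just a hypothetical sketch in the same simplified notation as the
examples above, not code from the tree):
lwkt_gettoken(T1);
foo = L1->head;
lwkt_gettoken(T2);	/* may block; T1's coverage can be lost here */
if (foo != L1->head)	/* so revalidate anything T1 protects */
	foo = L1->head;
bar = L2->head;
dosomething(foo);	/* valid as long as we don't block again */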
:If my understanding is correct, then is the code for
:sysctl_vnode() in src/sys/kern/vfs_subr.c incorrect?
:In this function (which I see is #if 0'd out), it
:takes the token for mountlist_token, which I presume
:protects the mountlist. Then it takes the token for
:mntvnode_token. However, the head of the mountlist,
:obtained before it takes the second token, is used
:after the second token acquisition. So, this code is
:wrong since the head of the mountlist could have
:changed, right???
Correct. Scanning the vnode list is not safe if the code within
the for() loop does anything that might block, and obtaining another
token counts as potentially blocking.
In fact, as an example of how tokens are used properly to scan the
mount vnode list, take a look at vmntvnodescan(), around line 1806
in kern/vfs_subr.c.
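Just to sketch the idea, a token-protected scan looks roughly like
this (an illustrative fragment using the same simplified token calls
as the examples above, not the actual vmntvnodescan() code; the
do_something() callback is hypothetical):
lwkt_gettoken(&mntvnode_token);
for (vp = TAILQ_FIRST(&mp->mnt_nvnodelist); vp; vp = nvp) {
	/*
	 * Save the next pointer before doing anything that can
	 * block.  If the loop body blocks (or acquires another
	 * token), the token coverage can be lost and even nvp can
	 * go stale, which is why the real scanning code tracks its
	 * position and restarts when it detects a ripout.
	 */
	nvp = TAILQ_NEXT(vp, v_nmntvnodes);
	do_something(vp);
}
lwkt_reltoken(&mntvnode_token);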
:Apologies in advance if this seems like a stupid
:question.
:
:-J
Not at all, your questions show a very good understanding of the
token code.
Basically you have hit upon a fundamental difference between tokens
and mutexes. While a mutex acts sort of like a lock, a token
acts nothing like a mutex or a lock. Whereas getting multiple mutexes
guarantees the atomicity of earlier-acquired mutexes, getting multiple
tokens does not. This also means that mutexes have deadlock issues
while tokens cannot deadlock.
In fact, the fact that tokens cannot deadlock, coupled with there
being no expectation of atomicity for earlier-acquired tokens when
later operations block, leads to a great deal of code simplification.
If you look at FreeBSD-5, you will notice that FreeBSD-5 passes held
mutexes down the subroutine stack quite often, in order to allow some
very deep procedural level to temporarily release a mutex in order to
switch or block or deal with a deadlock. There is a great deal of
code pollution in FreeBSD-5 because of this (where some procedures
must be given knowledge of the mutexes held by other unrelated procedures
in order to function properly).
You don't have any of that mess with the token abstraction, but there
is a cost, and that cost is that you lose atomicity across blocking
operations.
Another way to think of it is to compare the token abstraction with the
SPL mechanism. Tokens and SPLs work almost identically, except a token
works across cpus while SPLs only work within a single cpu's domain.
For example, if you are holding an SPL and you tsleep(), the system
'loses' the SPL until your thread resumes from the tsleep(). The
same thing happens with tokens you hold through a tsleep().
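So the rule of thumb is: any time you block, revalidate afterwards.
Roughly (again just an illustrative sketch in the simplified notation
used above; the wait channel and message are arbitrary):
lwkt_gettoken(T1);
foo = L1->head;
tsleep(L1, 0, "tkwait", hz);	/* token coverage is lost while asleep */
/* we hold T1 again here, but L1 may have changed while we slept */
if (foo != L1->head)
	foo = L1->head;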
All a token guarantees is that all the tokens you are holding will be
acquired while your thread is running. If your thread blocks or switches
away for any reason (with one exception which I will describe below),
the tokens you are holding may be lost for the duration. The scheduler
will not schedule your thread again until it is able to reacquire all
the tokens you are holding.
The one exception to this rule occurs in how DragonFly handles interrupt
preemption. Since interrupts are in fact their own threads, interrupt
preemption actually switches to the interrupt thread, then switches back
to the original thread. However, in the preemption case the tokens held
by the original thread are left acquired, and if the interrupt thread
blocks for any reason the system switches back to the original thread,
leaving the token abstraction intact from the point of view of the
original thread regardless of how many interrupt preemptions might occur.
-Matt
Matthew Dillon
<dillon@xxxxxxxxxxxxx>