DragonFly BSD
DragonFly kernel List (threaded) for 2006-01

Re: a take at cache coherency


From: Csaba Henk <csaba.henk@xxxxxxx>
Date: 25 Jan 2006 14:41:35 GMT

On 2006-01-24, Simon 'corecode' Schubert <corecode@xxxxxxxxxxxx> wrote:
> actually I forgot about the need to have direct shadow associations and
> thought about a replacement.  You are right, we can't do this.

Unless Matt tells us how to live without direct shadow associations...

> I'd still use a ncp_shadowroot to get O(1) locking.

While you deref the shadowroot pointer to go for the group lock,
the group might get broken up, and the shadowroot info will be stale.

By going step by step you can avoid such pitfalls: if the owner of the
head lock (i.e., the group lock) decides to break up the group, then
you will stop at some intermediate node (where the cut has taken place),
but that's OK because that's then the new head, locked by you. So you
end up with a consistent state.
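
To make the step-by-step idea concrete, here is a rough user-space
sketch (pthread mutexes; the struct and field names like ->shadowed are
made up for the sketch, the real namecache layout differs):

#include <pthread.h>
#include <stddef.h>

struct ncnode {
	pthread_mutex_t  lk;        /* per-node lock; the head's lk is the group lock */
	struct ncnode   *shadowed;  /* the entry this one shadows; NULL at the head */
};

/*
 * Walk toward the head hand over hand.  If the group gets broken up
 * while we walk, the cut happens at a node we will reach with its lock
 * held, and that node is then the new head -- so we still return
 * holding the group lock of a consistent group.
 */
static struct ncnode *
shadow_group_lock(struct ncnode *ncp)
{
	struct ncnode *cur = ncp, *next;

	pthread_mutex_lock(&cur->lk);
	for (;;) {
		next = cur->shadowed;           /* stable while cur is locked */
		if (next == NULL)
			return (cur);           /* cur is the head; we hold the group lock */
		pthread_mutex_lock(&next->lk);  /* always walking one direction: no deadlock */
		pthread_mutex_unlock(&cur->lk);
		cur = next;
	}
}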

There are no such concerns with a flat or star topology, but I really
imagined node-to-node shadowing and not just one big hippie camp...

Hmm... it's true you can trade cheap group modification for cheap
locking: if you obtain all "interlocks" (non-head node locks)
before breaking up the group, you can keep the shadowroot data
consistent. I think it would be deadlock free because only the group
lock owner wants to hold more than one interlock at a time.
Given that you lock far more often than you edit groups, it might be a
good deal.
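
Roughly, the break-up side of that alternative would look like this
(again a toy model; ->shadowroot, ->next_in_group and the helper name
are invented, and membership list maintenance is omitted):

#include <pthread.h>
#include <stddef.h>

struct gnode {
	pthread_mutex_t  lk;            /* interlock; the head's lk is the group lock */
	struct gnode    *shadowroot;    /* cached pointer to the group head, for O(1) locking */
	struct gnode    *next_in_group; /* group members hanging off the head */
};

/*
 * Break the group at `cutpoint'.  The caller holds the group lock
 * (head->lk).  Taking every interlock first means no locker can be in
 * the middle of using a shadowroot we are about to change; and since
 * only the group lock owner ever holds more than one interlock at a
 * time, collecting them cannot deadlock.
 */
static void
group_break(struct gnode *head, struct gnode *cutpoint)
{
	struct gnode *ncp;

	for (ncp = head->next_in_group; ncp != NULL; ncp = ncp->next_in_group)
		pthread_mutex_lock(&ncp->lk);

	/* cutpoint becomes the head of its own group. */
	for (ncp = cutpoint; ncp != NULL; ncp = ncp->next_in_group)
		ncp->shadowroot = cutpoint;

	for (ncp = head->next_in_group; ncp != NULL; ncp = ncp->next_in_group)
		pthread_mutex_unlock(&ncp->lk);
}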

>>  I doubt if ever
>> anyone would want to create an overlay of about hundred levels... even
>> ten would be pretty eccentric.
>
> not at all.  i'm thinking of packaging systems which use layered
> filesystems to show packages (or not), stuff like this...

Hm, yes, I know of that idea; that's a valid point.

>> AFAICS the critical one is breaking down the parent/child relation.  My
>> idea is that a p/c cut should imply breaking down shadower/shadowed
>> relations. If we do that, the upper layer will see the lack of the link
>> into the lower one, and will recreate that as it's appropriate in the
>> new situation. The upper layer wouldn't be aware of the rename event in
>> the lower one as such.
>
> I don't understand this parent/child thing.  could you please elaborate
> on this?  We don't have disconnected namecache entries, but maybe it's
> something else you are talking about.

See cache_rename(fncp, tncp).

{f,t}ncp get invalidated, sure, they have to. If they are leaf nodes,
then it's the end of the story: when they get re-resolved they will
properly get the post-rename associations. The overlay will do the same,
as invalidations go up.

The funky case is when they have children. Children are migrated one by
one (each in two steps: unlink from old, link into new).
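
Schematically (toy structures with invented names, only to show the two
steps; the real cache_rename() code of course differs):

#include <stddef.h>

struct tnode {
	struct tnode *parent;
	struct tnode *sibling;     /* next child of the same parent */
	struct tnode *children;    /* first child */
};

/*
 * Each child is moved in two steps: unlink from the old parent, then
 * link into the new one.  The unlink step is where any state keyed on
 * the old p/c edge (like a shadow association in the overlay) goes
 * stale.
 */
static void
migrate_children(struct tnode *fncp, struct tnode *tncp)
{
	struct tnode *child;

	while ((child = fncp->children) != NULL) {
		fncp->children = child->sibling;   /* step 1: unlink from old */
		child->sibling = tncp->children;   /* step 2: link into new */
		child->parent = tncp;
		tncp->children = child;
	}
}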

Let's go over my example in
http://leaf.dragonflybsd.org/mailarchive/kernel/2006-01/msg00105.html

Say we have d, dd, d/x#1 and dd/x#2 in the lower layer (both entries are
named "x"; think of #1 and #2 as something like inode numbers, just to
tell the two apart).

Move dd to ddd and d to dd, then scan through these to revalidate them.
You'll see x#1 relinked into dd, and x#2 relinked into ddd.

In the overlay you'll keep seeing (the shadow of) x#2 in (the shadow
of) dd: of course, no one migrated (the shadow of) x#2 into (the shadow
of) ddd. What happened? The shadowing relation as such went stale: in the
overlay, the namecache entry named x in dd should now shadow the new
resident of dd in the lower layer, which is x#1.
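
In short:

	lower layer before:  d/x#1, dd/x#2
	lower layer after:   dd/x#1, ddd/x#2      (dd -> ddd, d -> dd)
	overlay:             x under dd still shadows x#2, although the
	                     lower layer's dd now holds x#1 -- a stale link.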

Solution: upon tearing parent relations, also tear the shadow
associations. Yet another kind of invalidation. The upper layer will see
the missing shadowed entry and will recompute it as appropriate.
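
Sketched on the same kind of toy structure (field names ->shadower and
->shadowed are invented; this is the proposal, not existing code):

#include <stddef.h>

struct onode {
	struct onode *parent, *sibling, *children;
	struct onode *shadowed;   /* lower-layer entry this one shadows */
	struct onode *shadower;   /* upper-layer entry shadowing this one */
};

static void
unlink_from_parent(struct onode *parent, struct onode *child)
{
	struct onode **pp;

	/* Tear the p/c relation (step 1 of a migration). */
	for (pp = &parent->children; *pp != NULL; pp = &(*pp)->sibling) {
		if (*pp == child) {
			*pp = child->sibling;
			break;
		}
	}
	child->parent = NULL;
	child->sibling = NULL;

	/*
	 * The proposed extra invalidation: the shadow association was only
	 * meaningful relative to the old p/c edge, so tear it as well.  The
	 * overlay then sees a missing shadowed entry on its next resolve and
	 * recomputes it against the lower layer's new layout (x#1 instead of
	 * the stale x#2 in the example above).
	 */
	if (child->shadower != NULL) {
		child->shadower->shadowed = NULL;
		child->shadower = NULL;
	}
}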

>>> this connection).  imagine two processes doing this, each in
>>> different order:
>>>
>>> time	thread1			thread2
>>> 1	lock null		lock lower
>>> 2	resolve null (blocks)	lock null (blocks)
>>>
>>> we have a deadlock.  resolving null means having to lock lower, but
[...]
>> Regarding your example, thread2 won't perform a separate "lock null"
>> step.
>
> unless it just wants to lock two entries of which it doesn't know the
> shadow association.  maybe for renaming or whatnot.

OK. Sure, if null and lower are separate nodes, and thread1 locks null
first and then wants to lock lower, while thread2 locks lower first and
then wants to lock null, that's a deadlock.

To add, you say thread1 will want to lock lower because that's needed to
resolve null. But you don't say anything concrete about thread2's moves;
you just say "why not, it just wants to do that".

Now how much more concrete is that than saying "why not, it just wants
to do that" about both threads? That is a completely generic deadlock
scenario, and it can be raised at any time. If I ask about the threat of
such a generic scenario, what would you say (there are other functions
which will want to lock two namecache nodes, like cache_rename...)?
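
(For what it's worth, the usual generic answer to the generic scenario
is a fixed lock order for whoever needs two namecache locks at once.  A
toy illustration on plain pthread mutexes, not a claim about what the
existing code does:)

#include <pthread.h>
#include <stdint.h>

/* Take two locks in a fixed global order (by address) so that two
 * threads locking the same pair can never deadlock. */
static void
lock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
{
	pthread_mutex_t *tmp;

	if (a == b) {
		pthread_mutex_lock(a);
		return;
	}
	if ((uintptr_t)a > (uintptr_t)b) {
		tmp = a;
		a = b;
		b = tmp;
	}
	pthread_mutex_lock(a);
	pthread_mutex_lock(b);
}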

Regards,
Csaba


