DragonFly BSD
DragonFly users List (threaded) for 2009-12

panic: lockmgr: locking against myself


From: Goetz Isenmann <info@xxxxxxxxxxxxxxxxx>
Date: Fri, 4 Dec 2009 17:51:42 +0100

Hi!

I got this panic while running "darcs whatsnew" on a directory that was
rsynced to/from a Linux machine. I got the same crash after a reboot and
after a hammer cleanup. A local copy of the directory did not crash, at
least temporarily, and maybe the original also stopped crashing after a
hammer version-upgrade from 2.0 to 2.3, but I am not sure; it looks like
the same panic happened again after further (purely local) changes. It
is probably related to the fact that the hard link counts differ quite a
bit between the global patch cache in $HOME/.darcs/cache and the
multiple working copies. In the affected directory I see link counts
between 1 and 3.

#0  dumpsys () at ./machine/thread.h:83
#1  0xc031d801 in boot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:375
#2  0xc031dac6 in panic (fmt=0xc05e63f0 "lockmgr: locking against myself") at /usr/src/sys/kern/kern_shutdown.c:802
#3  0xc03111f0 in lockmgr (lkp=0xe061cb20, flags=2) at /usr/src/sys/kern/kern_lock.c:348
#4  0xc0376341 in vn_lock (vp=0xe061ca68, flags=2) at /usr/src/sys/kern/vfs_vnops.c:1002
#5  0xc036e70c in vget (vp=0xe061ca68, flags=0) at /usr/src/sys/kern/vfs_lock.c:358
#6  0xc0493963 in hammer_get_vnode (ip=0xe7d09450, vpp=0xde2b79c0) at /usr/src/sys/vfs/hammer/hammer_inode.c:320
#7  0xc04a7b60 in hammer_vop_nresolve (ap=0xde2b7ad0) at /usr/src/sys/vfs/hammer/hammer_vnops.c:1126
#8  0xc0377e48 in vop_nresolve_ap (ap=0xde2b7ad0) at /usr/src/sys/kern/vfs_vopops.c:1618
#9  0xddaea05e in ?? ()
#10 0xc0377b72 in vop_nresolve (ops=0xd9601bf0, nch=0xde2b7b10, dvp=0xe13a8068, cred=0xc3e27a08) at /usr/src/sys/kern/vfs_vopops.c:951
#11 0xc036234b in cache_resolve (nch=0xde2b7b4c, cred=0xc3e27a08) at /usr/src/sys/kern/vfs_cache.c:2135
#12 0xc036aa39 in nlookup (nd=0xde2b7c48) at /usr/src/sys/kern/vfs_nlookup.c:499
#13 0xc03736e4 in kern_link (nd=0xde2b7c80, linknd=0xde2b7c48) at /usr/src/sys/kern/vfs_syscalls.c:2085
#14 0xc037383f in sys_link (uap=0xde2b7cf0) at /usr/src/sys/kern/vfs_syscalls.c:2121
#15 0xc053cc09 in syscall2 (frame=0xde2b7d40) at /usr/src/sys/platform/pc32/i386/trap.c:1339
#16 0xc0526f06 in Xint0x80_syscall () at /usr/src/sys/platform/pc32/i386/exception.s:876
#17 0x28782c03 in ?? ()
Backtrace stopped: previous frame inner to this frame (corrupt stack?)
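
If I read the panic message correctly, "locking against myself" means
the thread doing the lookup already holds the exclusive vnode lock that
vget()/vn_lock() asks it to take again. As a userland analogy only
(this is not the kernel code path), the same pattern with a
non-recursive, error-checking mutex looks like this:

/* Userland analogy for "locking against myself": a non-recursive lock
 * re-acquired by the thread that already holds it.  An error-checking
 * pthread mutex reports this as EDEADLK instead of panicking. */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
	pthread_mutexattr_t attr;
	pthread_mutex_t lock;
	int error;

	pthread_mutexattr_init(&attr);
	pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
	pthread_mutex_init(&lock, &attr);

	pthread_mutex_lock(&lock);          /* first acquisition, fine */
	error = pthread_mutex_lock(&lock);  /* same thread, same lock */
	if (error != 0)
		printf("second lock attempt: %s\n", strerror(error));
	return 0;
}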

I am running i386 DragonFly v2.4.1.28.gf85b4 on an AMD Athlon(tm) 64
X2 Dual Core Processor 4400+.

I now have a similar situation inside a vkernel, but I am still not
sure what exactly it takes to trigger this panic. Looking at the
vkernel panic (sorry, I know nearly nothing about gdb), both arguments
of sys_link contain the same filename in different directories. An "ls
-li" shows a link count of 3 and the same inode number; the third
reference is in a third directory and also has the same name.

Is there a way to get a system call trace of the darcs process from
before the panic? Using ktrace, I see no trace file after the reboot.
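
If it is of any use, this is roughly what ktrace(1) does and what I
would try again, assuming DragonFly's ktrace(2) still has the
FreeBSD-style interface; the output path below is made up, and I
suspect the trace is lost simply because data still sitting in the
buffer cache when the panic hits never reaches the disk:

/* Sketch only: enable syscall/namei tracing on a child and exec darcs,
 * assuming the FreeBSD-style ktrace(2) interface.  The trace file name
 * is arbitrary. */
#include <sys/param.h>
#include <sys/ktrace.h>
#include <sys/wait.h>
#include <err.h>
#include <unistd.h>

int
main(void)
{
	pid_t pid = fork();

	if (pid == -1)
		err(1, "fork");
	if (pid == 0) {
		/* Trace this process and its descendants. */
		if (ktrace("/var/tmp/darcs.ktrace",
		    KTROP_SET | KTRFLAG_DESCEND,
		    KTRFAC_SYSCALL | KTRFAC_SYSRET | KTRFAC_NAMEI,
		    getpid()) == -1)
			err(1, "ktrace");
		execlp("darcs", "darcs", "whatsnew", (char *)NULL);
		err(1, "execlp darcs");
	}
	waitpid(pid, NULL, 0);
	return 0;
}

If the file survives the panic at all, it should be readable with
"kdump -f /var/tmp/darcs.ktrace".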

I am also currently unable to save a crash dump for the vkernel:
Checking for core dump...
savecore: read: Invalid argument
I need to recheck my setup.

But maybe this information already triggers an idea... and I will not
need to dig further into bits I do not understand.


