Opened 16 years ago

Closed 16 years ago

Last modified 16 years ago

#2577 closed bug (fixed)

Panic: _mutex_lock(): double lock

Reported by:  andreasf
Owned by:     bonefish
Priority:     normal
Milestone:    R1
Component:    System/Kernel
Version:      R1/pre-alpha1
Keywords:
Cc:
Blocked By:
Blocking:
Platform:     x86

Description

The following panic is reproducible by running a Hello World program with mono (with shared handles left enabled), with SMP disabled in the boot menu.
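
For context, mono's shared handles are backed by SysV (XSI) semaphores, which appears to be what produces the xsi_semget/xsi_semop/RecordUndo lines below. A minimal sketch of that call pattern (illustrative only; the key, set size and mode are taken from the log, everything else is hypothetical):

#include <sys/ipc.h>
#include <sys/sem.h>
#include <cstdio>

int main()
{
    // Open or create a set of 8 semaphores with mode 0600 (flags = 384 in the log).
    int id = semget(1292045451, 8, IPC_CREAT | 0600);
    if (id < 0) { perror("semget"); return 1; }

    // Operate on one semaphore of the set; SEM_UNDO makes the kernel keep an
    // undo record (the XsiSemaphore::RecordUndo lines) so the operation is
    // rolled back if the team exits without reversing it itself.
    struct sembuf op = { 6, 1, SEM_UNDO };
    if (semop(id, &op, 1) < 0) { perror("semop"); return 1; }

    // Reverse the operation.
    op.sem_op = -1;
    semop(id, &op, 1);

    // Remove the whole set (the XsiSemaphoreSet destructor in the log).
    semctl(id, 0, IPC_RMID);
    return 0;
}

The serial output captured up to the panic: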

runtime_loader: cannot open file /ToBe/Mono/mcs/class/lib/monolite/mscorlib.dll.so
xsi_semget: key = 1292045451, numberOfSemaphores = 8, flags = 384
xsi_semop: semaphoreID = 1, ops = 0x7ffeebd4, numOps = 1
xsi_semop: semaphoreNumber = 6, value = 1
xsi_semop: semaphore acquired succesfully
XsiSemaphore::RecordUndo: new record added. Team = 49333, semaphoreSetID = 1, semaphoreNumber = 6, value = -1
xsi_semop: semaphoreID = 1, ops = 0x7ffeebd4, numOps = 1
xsi_semop: semaphoreNumber = 7, value = 0
xsi_semop: semaphore acquired succesfully
XsiSemaphore::RecordUndo: new record added. Team = 49333, semaphoreSetID = 1, semaphoreNumber = 7, value = 1
xsi_semop: semaphoreID = 1, ops = 0x7ffeebc4, numOps = 1
xsi_semop: semaphoreNumber = 6, value = 0
xsi_semop: semaphore acquired succesfully
XsiSemaphore::RecordUndo: found record. Team = 49333, semaphoreSetID = 1, semaphoreNumber = 6, value = -1
xsi_semop: semaphoreID = 1, ops = 0x7ffee844, numOps = 1
xsi_semop: semaphoreNumber = 3, value = 1
xsi_semop: semaphore acquired succesfully
XsiSemaphore::RecordUndo: new record added. Team = 49333, semaphoreSetID = 1, semaphoreNumber = 3, value = -1
xsi_semop: semaphoreID = 1, ops = 0x7ffee844, numOps = 1
xsi_semop: semaphoreNumber = 3, value = 0
xsi_semop: semaphore acquired succesfully
XsiSemaphore::RecordUndo: found record. Team = 49333, semaphoreSetID = 1, semaphoreNumber = 3, value = -1
runtime_loader: cannot open file /ToBe/Mono/hello.exe.so
runtime_loader: cannot open file /ToBe/Mono/hello.exe.so
runtime_loader: cannot open file /ToBe/Mono/mcs/class/lib/monolite/mscorlib.dll.so
xsi_semop: semaphoreID = 1, ops = 0x7ffee800, numOps = 1
xsi_semop: semaphoreNumber = 2, value = 1
xsi_semop: semaphore acquired succesfully
XsiSemaphore::RecordUndo: new record added. Team = 49333, semaphoreSetID = 1, semaphoreNumber = 2, value = -1
xsi_semop: semaphoreID = 1, ops = 0x7ffee7f0, numOps = 1
xsi_semop: semaphoreNumber = 2, value = 0
xsi_semop: semaphore acquired succesfully
XsiSemaphore::RecordUndo: found record. Team = 49333, semaphoreSetID = 1, semaphoreNumber = 2, value = -1
xsi_semop: semaphoreID = 1, ops = 0x7ffee800, numOps = 1
xsi_semop: semaphoreNumber = 2, value = 1
xsi_semop: semaphore acquired succesfully
XsiSemaphore::RecordUndo: found record. Team = 49333, semaphoreSetID = 1, semaphoreNumber = 2, value = 0
xsi_semop: semaphoreID = 1, ops = 0x7ffee7f0, numOps = 1
xsi_semop: semaphoreNumber = 2, value = 0
xsi_semop: semaphore acquired succesfully
XsiSemaphore::RecordUndo: found record. Team = 49333, semaphoreSetID = 1, semaphoreNumber = 2, value = -1
xsi_semop: semaphoreID = 1, ops = 0x7ffee800, numOps = 1
xsi_semop: semaphoreNumber = 2, value = 1
xsi_semop: semaphore acquired succesfully
XsiSemaphore::RecordUndo: found record. Team = 49333, semaphoreSetID = 1, semaphoreNumber = 2, value = 0
xsi_semop: semaphoreID = 1, ops = 0x7ffee7f0, numOps = 1
xsi_semop: semaphoreNumber = 2, value = 0
xsi_semop: semaphore acquired succesfully
XsiSemaphore::RecordUndo: found record. Team = 49333, semaphoreSetID = 1, semaphoreNumber = 2, value = -1
xsi_semop: semaphoreID = 1, ops = 0x7ffee85c, numOps = 1
xsi_semop: semaphoreNumber = 3, value = 1
xsi_semop: semaphore acquired succesfully
XsiSemaphore::RecordUndo: found record. Team = 49333, semaphoreSetID = 1, semaphoreNumber = 3, value = 0
xsi_semop: semaphoreID = 1, ops = 0x7ffee85c, numOps = 1
xsi_semop: semaphoreNumber = 3, value = 0
xsi_semop: semaphore acquired succesfully
XsiSemaphore::RecordUndo: found record. Team = 49333, semaphoreSetID = 1, semaphoreNumber = 3, value = -1
xsi_semop: semaphoreID = 1, ops = 0x7ffee35c, numOps = 1
xsi_semop: semaphoreNumber = 3, value = 1
xsi_semop: semaphore acquired succesfully
XsiSemaphore::RecordUndo: found record. Team = 49333, semaphoreSetID = 1, semaphoreNumber = 3, value = 0
xsi_semop: semaphoreID = 1, ops = 0x7ffee35c, numOps = 1
xsi_semop: semaphoreNumber = 3, value = 0
xsi_semop: semaphore acquired succesfully
XsiSemaphore::RecordUndo: found record. Team = 49333, semaphoreSetID = 1, semaphoreNumber = 3, value = -1
xsi_semop: semaphoreID = 1, ops = 0x7ffee87c, numOps = 1
xsi_semop: semaphoreNumber = 6, value = 1
xsi_semop: semaphore acquired succesfully
XsiSemaphore::RecordUndo: found record. Team = 49333, semaphoreSetID = 1, semaphoreNumber = 6, value = 0
xsi_semctl: semaphoreID = 1, semaphoreNumber = 7, command = 4
xsi_semctl: semaphoreID = 1, semaphoreNumber = 0, command = 0
XsiSemaphoreSet::~XsiSemaphoreSet(): removing semaphore set
XsiSemaphore::ClearUndos: semaphoreSetID = 1, semaphoreNumber = 0
XsiSemaphore::ClearUndos: semaphoreSetID = 1, semaphoreNumber = 1
XsiSemaphore::ClearUndos: semaphoreSetID = 1, semaphoreNumber = 2
XsiSemaphore::ClearUndos: semaphoreSetID = 1, semaphoreNumber = 3
XsiSemaphore::ClearUndos: semaphoreSetID = 1, semaphoreNumber = 4
XsiSemaphore::ClearUndos: semaphoreSetID = 1, semaphoreNumber = 5
XsiSemaphore::ClearUndos: semaphoreSetID = 1, semaphoreNumber = 6
XsiSemaphore::ClearUndos: semaphoreSetID = 1, semaphoreNumber = 7
PANIC: _mutex_lock(): double lock of 0x90eb7318 by thread 49333
Welcome to Kernel Debugging Land...
Thread 49333 "mono" running on CPU 0
kdebug> bt
stack trace for thread 49333 "mono"
    kernel stack: 0x93f30000 to 0x93f34000
      user stack: 0x7efef000 to 0x7ffef000
frame            caller     <image>:function + offset
 0 93f336f4 (+  48) 80054379   <kernel>:invoke_debugger_command + 0x00ed
 1 93f33724 (+  64) 80054171   <kernel>:invoke_pipe_segment__FP21debugger_command_pipelPc + 0x0079
 2 93f33764 (+  64) 800544b9   <kernel>:invoke_debugger_command_pipe + 0x009d
 3 93f337a4 (+  48) 80055394   <kernel>:_ParseCommandPipe__16ExpressionParserRi + 0x0234
 4 93f337d4 (+  48) 80054d4a   <kernel>:EvaluateCommand__16ExpressionParserPCcRi + 0x01de
 5 93f33804 (+ 224) 80056760   <kernel>:evaluate_debug_command + 0x0088
 6 93f338e4 (+  64) 800527ee   <kernel>:kernel_debugger_loop__Fv + 0x01ae
 7 93f33924 (+  48) 80053387   <kernel>:kernel_debugger + 0x0117
 8 93f33954 (+ 192) 80053265   <kernel>:panic + 0x0029
 9 93f33a14 (+  64) 8003765c   <kernel>:_mutex_lock + 0x00e0
10 93f33a54 (+  64) 8002e58f   <kernel>:file_cache_set_size + 0x003b
11 93f33a94 (+  48) 80593878   <bfs>:SetFileSize__5InodeR11Transactionx + 0x00bc
12 93f33ac4 (+ 352) 80593a5d   <bfs>:Free__5InodeR11Transaction + 0x0029
13 93f33c24 (+  80) 8059c343   <bfs>:bfs_remove_vnode__FP9fs_volumeP8fs_vnodeb + 0x00b3
14 93f33c74 (+  48) 80083b5f   <kernel>:free_vnode__FP5vnodeb + 0x00af
15 93f33ca4 (+  48) 80083cf5   <kernel>:dec_vnode_ref_count__FP5vnodebT1 + 0x0119
16 93f33cd4 (+  32) 800883ea   <kernel>:vfs_put_vnode + 0x001e
17 93f33cf4 (+  32) 8002f899   <kernel>:ReleaseStoreRef__12VMVnodeCache + 0x0015
18 93f33d14 (+  48) 800abaf7   <kernel>:RemoveArea__7VMCacheP7vm_area + 0x006f
19 93f33d44 (+  64) 800a4cbb   <kernel>:delete_area__FP16vm_address_spaceP7vm_area + 0x009b
20 93f33d84 (+  48) 800a6ba9   <kernel>:vm_delete_areas + 0x0079
21 93f33db4 (+  48) 800aa6bb   <kernel>:vm_delete_address_space + 0x0033
22 93f33de4 (+  96) 80049531   <kernel>:team_delete_team + 0x0245
23 93f33e44 (+ 128) 8004cb4c   <kernel>:thread_exit + 0x0364
24 93f33ec4 (+  64) 80042d00   <kernel>:handle_signals + 0x0384
25 93f33f04 (+  64) 8004cf4e   <kernel>:thread_at_kernel_exit + 0x0076
26 93f33f44 (+ 100) 800b69cb   <kernel>:kernel_exit_handle_signals + 0x0006 (nearest)
user iframe at 0x93f33fa8 (end = 0x93f34000)
 eax 0x0            ebx 0x6beb38        ecx 0x7ffee928   edx 0x200246
 esi 0x6c2760       edi 0x1             ebp 0x7ffee954   esp 0x93f33fdc
 eip 0xffff0104  eflags 0x200217   user esp 0x7ffee928
 vector: 0x63, error code: 0x0
27 93f33fa8 (+   0) ffff0104
7ffee954 -- read fault
kdebug> continue

Trying to continue results in a freeze with no further serial output.
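
The backtrace suggests the semaphore code is only a bystander: the panic is a lock re-entrancy on the vm_cache mutex. The cache lock is apparently taken once while the dying team's areas are torn down, and releasing the last vnode reference then drops back into file_cache_set_size(), which tries to take the same lock again. A much simplified, hypothetical illustration of that shape (not the actual kernel code; the kernel's non-recursive mutex panics at this point, while the std::mutex below simply blocks forever):

#include <mutex>

std::mutex sCacheLock;   // stands in for the per-cache "vm_cache" mutex

void file_cache_set_size()
{
    // Second acquisition by the same thread -- with a non-recursive mutex
    // this is exactly the "double lock" the panic reports.
    std::lock_guard<std::mutex> locker(sCacheLock);
    // ... truncate the cached file ...
}

void release_store_ref()
{
    // Dropping the last reference removes the vnode; for BFS that frees the
    // inode, which in turn truncates it through the file cache.
    file_cache_set_size();
}

void remove_area()
{
    std::lock_guard<std::mutex> locker(sCacheLock);   // first acquisition
    release_store_ref();                              // re-enters the same cache
}

int main()
{
    remove_area();   // never returns: hangs on the second lock
    return 0;
}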

Change History (6)

comment:1 by axeld, 16 years ago

Component: General → System/Kernel

Can you use the "mutex" command with the lock mentioned in the KDL message?

comment:2 by andreasf, 16 years ago

There you go:

PANIC: _mutex_lock(): double lock of 0x9125b660 by thread 48517
Welcome to Kernel Debugging Land...
Thread 48517 "mono" running on CPU 0
kdebug> mutex 0x9125b660
mutex 0x9125b660:
  name:            vm_cache
  flags:           0x0
  holder:          48517
  waiting threads:
kdebug> 

Anything else?
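
That output is what identifies this as a genuine double lock: holder is already 48517, the very thread that is calling _mutex_lock() again. A rough, hypothetical sketch of how a non-recursive debug mutex can detect this (not the actual Haiku implementation):

#include <cstdio>
#include <cstdlib>

typedef int thread_id;

struct mutex {
    const char* name;
    thread_id   holder;   // -1 while nobody holds the lock
};

void mutex_lock(mutex* lock, thread_id thread)
{
    if (lock->holder == thread) {
        // A non-recursive mutex must never be re-acquired by its holder.
        std::printf("PANIC: _mutex_lock(): double lock of %p by thread %d\n",
            static_cast<void*>(lock), thread);
        std::abort();
    }
    // ... otherwise wait until the mutex is free, then take ownership ...
    lock->holder = thread;
}

int main()
{
    mutex cache = { "vm_cache", -1 };
    mutex_lock(&cache, 48517);   // first acquisition succeeds
    mutex_lock(&cache, 48517);   // second one triggers the double-lock report
    return 0;
}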

comment:3 by andreasf, 16 years ago

As requested:

kdebug> thread 48517
THREAD: 0x914b5000
id:                 48517 (0xbd85)
name:               "mono"
all_next:           0x914a4800
team_next:          0x913ef800
q_next:             0x9143c000
priority:           20 (next 20)
state:              running
next_state:         ready
cpu:                0x80113480 (0)
sig_pending:        0x100020 (blocked: 0x20)
in_kernel:          1
fault_handler:      0x80054394
args:               0x912d8f00 0x00000000
entry:              0x80047df8
team:               0x90b7d000, "kernel_team"
  exit.sem:         464406
  exit.status:      0x1 (No Error (1))
  exit.reason:      0x1
  exit.signal:      0x0
  exit.waiters:
        0x95ffbe3c (group -1778401660, thread -1866996320)
kernel_stack_area:  1039902
kernel_stack_base:  0x95ffc000
user_stack_area:    -1
user_stack_base:    0x7efef000
user_local_storage: 0x7ffef000
kernel_errno:       0x0 (No error)
kernel_time:        553867
user_time:          38783
flags:              0x1
architecture dependant section:
        esp: 0x95fffc3c
        ss: 0x95ff0010
        fpu_state at 0x914b5380
kdebug>

comment:4 by bonefish, 16 years ago

Owner: changed from axeld to bonefish
Status: new → assigned

comment:5 by bonefish, 16 years ago

Resolution: fixed
Status: assigned → closed

Should be fixed in hrev26849. I don't have the test case, so please reopen if you encounter it again.

comment:6 by andreasf, 16 years ago

Gone after applying your patch to hrev26820.
