Opened 16 years ago
Closed 16 years ago
#2720 closed bug (fixed)
AppServer deadlock
| Reported by: | emitrax | Owned by: | axeld |
|---|---|---|---|
| Priority: | high | Milestone: | R1/alpha1 |
| Component: | Servers/app_server | Version: | R1/pre-alpha1 |
| Keywords: | | Cc: | olive@… |
| Blocked By: | | Blocking: | |
| Platform: | All | | |
Description
I selected a fairly large number of emails (maybe 700), right-clicked, and chose Get Info to see what would happen. I got a nice deadlock almost immediately.
No threads were in the ready queue.
```
kdebug> sem 1142
SEM: 0x923bf7f8
  id:               1142 (0x476)
  name:             'AppServerLink_sLock'
  owner:            81
  count:            -23
  queue:            6122 5555 6098 6107 6130 5587 5589 5567 5600 5593 5557
                    5565 6111 5595 5583 81 5591 5608 5559 5626 5624 6038 158
  last acquired by: 5561, count: 1
  last released by: 5624, count: 1

kdebug> thread 81
THREAD: 0x9311f000
  id:           81 (0x51)
  name:         "Tracker"
  all_next:     0x913a7800  team_next: 0x00000000  q_next: 0x80113dc0
  priority:     10 (next 10)
  state:        waiting
  next_state:   waiting
  cpu:          0x00000000
  sig_pending:  0x0 (blocked: 0x0)
  in_kernel:    1
  waiting for:  semaphore 1142
  ...

kdebug> thread 5661
THREAD: 0x931c8800
  id:                 5661 (0x161d)
  name:               "w>InfoWindow"
  all_next:           0x931a7000  team_next: 0x931b6800  q_next: 0x80113dc0
  priority:           15 (next 15)
  state:              waiting
  next_state:         waiting
  cpu:                0x00000000
  sig_pending:        0x0 (blocked: 0x0)
  in_kernel:          1
  waiting for:        semaphore 61236
  fault_handler:      0x00000000
  args:               0x002b1d4c 0x18304b98
  entry:              0x00657688
  team:               0x90b7fd14, "Tracker"
  exit.sem:           56765
  exit.status:        0x0 (No error)
  exit.reason:        0x0
  exit.signal:        0x0
  exit.waiters:
  kernel_stack_area:  113944
  kernel_stack_base:  0x9594c000
  user_stack_area:    113945
  user_stack_base:    0x78bef000
  user_local_storage: 0x78c2f000
  kernel_errno:       0x0 (No error)
  kernel_time:        30256
  user_time:          36408
  flags:              0x0
  architecture dependant section:
    esp: 0x9594fd38
    ss:  0x00000010
    fpu_state at 0x931c8b80

kdebug> sem 61236
SEM: 0x9251a690
  id:               61236 (0xef34)
  name:             'tmp_reply_port'
  owner:            -1
  count:            -1
  queue:            5661
  last acquired by: 0, count: 0
  last released by: 0, count: 0
```
Change History (5)
comment:1 by , 16 years ago
comment:2 by , 16 years ago
Never say never.
```
kdebug> bt 5661
stack trace for thread 5661 "w>InfoWindow"
    kernel stack: 0x9594c000 to 0x95950000
      user stack: 0x78bef000 to 0x78c2f000
frame               caller     <image>:function + offset
 0  9594fd94 (+  32) 800439ce   <kernel>:context_switch__FP6threadT0 + 0x0026
 1  9594fdb4 (+  64) 80043c38   <kernel>:scheduler_reschedule + 0x0248
 2  9594fdf4 (+  64) 80044f30   <kernel>:switch_sem_etc + 0x0368
 3  9594fe34 (+  64) 80044b9a   <kernel>:acquire_sem_etc + 0x0026
 4  9594fe74 (+  80) 80041f68   <kernel>:_get_port_message_info_etc + 0x0104
 5  9594fec4 (+  80) 80041e55   <kernel>:port_buffer_size_etc + 0x0025
 6  9594ff14 (+  48) 80042b31   <kernel>:_user_port_buffer_size_etc + 0x008d
 7  9594ff44 (+ 100) 800c8852   <kernel>:pre_syscall_debug_done + 0x0002 (nearest)
user iframe at 0x9594ffa8 (end = 0x95950000)
 eax 0xc2        ebx 0x6e2cdc    ecx 0x78c2e2d0  edx 0xffff0104
 esi 0xffffffff  edi 0x7fffffff  ebp 0x78c2e2fc  esp 0x9594ffdc
 eip 0xffff0104  eflags 0x212    user esp 0x78c2e2d0
 vector: 0x63, error code: 0x0
 8  9594ffa8 (+   0) ffff0104
 9  78c2e2fc (+  48) 002b363f   <libbe.so>:__cl__Q38BPrivate11BLooperList12FindPortPredRQ38BPrivate11BLooperList10LooperData + 0x015b (nearest)
10  78c2e32c (+ 128) 002b7126   <libbe.so>:_SendMessage__C8BMessagelllP8BMessagexx + 0x0176
11  78c2e3ac (+  96) 002bd799   <libbe.so>:SendMessage__C10BMessengerP8BMessageT1xx + 0x0061
12  78c2e40c (+  64) 002c5c2d   <libbe.so>:SendTo__Q27BRoster7PrivateP8BMessageT1b + 0x0061
13  78c2e44c (+ 208) 003763d6   <libbe.so>:SetAppHint__9BMimeTypePC9entry_ref + 0x00fa
14  78c2e51c (+ 528) 002c378e   <libbe.so>:_ResolveApp__C7BRosterPCcP9entry_refT2PcPUlPb + 0x01ea
15  78c2e72c (+  64) 002c090e   <libbe.so>:FindApp__C7BRosterPCcP9entry_ref + 0x003e
16  78c2e76c (+ 656) 0054baff   <libtracker.so>:__Q28BPrivate13AttributeViewG5BRectPQ28BPrivate5Model + 0x0aa7
17  78c2e9fc (+ 128) 0054870b   <libtracker.so>:Show__Q28BPrivate11BInfoWindow + 0x012f
18  78c2ea7c (+ 656) 00548d50   <libtracker.so>:MessageReceived__Q28BPrivate11BInfoWindowP8BMessage + 0x00f8
19  78c2ed0c (+  48) 002b082f   <libbe.so>:DispatchMessage__7BLooperP8BMessageP8BHandler + 0x005b
20  78c2ed3c (+ 480) 003582f9   <libbe.so>:DispatchMessage__7BWindowP8BMessageP8BHandler + 0x174d
21  78c2ef1c (+  96) 0035bac4   <libbe.so>:task_looper__7BWindow + 0x0270
22  78c2ef7c (+  48) 002b1d8b   <libbe.so>:_task0___7BLooperPv + 0x003f
23  78c2efac (+  48) 006576a8   <libroot.so>:_get_next_team_info + 0x005c (nearest)
24  78c2efdc (+   0) 78c2efec   113945:w>InfoWindow_5661_stack@0x78bef000 + 0x3ffec
```
comment:3 by , 16 years ago
| Cc: | added |
|---|---|
comment:4 by , 16 years ago
Not sure what this stack trace is supposed to tell us; at least it does not seem to pass through any code that would need the app_server link lock.
comment:5 by , 16 years ago
| Resolution: | → fixed |
|---|---|
| Status: | new → closed |
Fixed in hrev30517. It's not the perfect solution, though, as we should have a better backup plan.
Unfortunately the info isn't sufficient. "AppServerLink_sLock" belongs to a BLocker, which by default is used benaphore-style. That is, the current holder is not obvious; one would have to find out by checking stack traces. It might have been thread 5661, but we will never know.