Opened 8 years ago
Last modified 8 years ago
#13160 assigned bug
Debugger crashes when loading debuginfo
Reported by: | KapiX | Owned by: | nobody
---|---|---|---
Priority: | normal | Milestone: | Unscheduled
Component: | System/libroot.so | Version: | R1/Development
Keywords: | | Cc: |
Blocked By: | | Blocking: |
Platform: | All | |
Description
Repro package: http://haiku.kacperkasper.pl/instdir.zip (356 MB)
1. Run:

       cd program
       LIBRARY_PATH=.:$LIBRARY_PATH svdemo

   svdemo crashes. Click Debug. Debugger will crash while loading debuginfo after some time.

2. Run:

       export LIBRARY_PATH=.:$LIBRARY_PATH
       LD_PRELOAD=libroot_debug.so MALLOC_DEBUG=g Debugger svdemo

   svdemo crashes. Click Debug. Debugger crashes almost immediately.
hrev50794 gcc5h
Reports attached.
Attachments (2)
Change History (8)
by , 8 years ago
Attachment: Debugger-1095-debug-08-01-2017-07-21-42.report added
by , 8 years ago
Attachment: libroot_debug.report added
comment:1 by , 8 years ago
comment:2 by , 8 years ago
I was going to ask the same, this looks nearly identical to what occurs when attempting to debug WebKit on 32-bit, where we exhaust the address space while parsing the info. It may or may not also help to specify -fdebug-types-section in your debug build flags for your app.
comment:3 by , 8 years ago
The issue with trying 64-bit is that it requires a lot of extra work before yielding any results: LibreOffice needs a long list of libraries present to even start compiling, and then the compilation takes around 6 hours on my machine.
I'll try stripping and keeping two sets of libraries - with and without debuginfo - and use debuginfo from one at a time. Maybe that will get me somewhere.
This should be fixed nonetheless because I don't think one failed allocation should bring the whole process down.
Bonus question: why do I see memory exhaustion when not even 50% of the total available memory is allocated? I know there are limits on allocated memory for a single process, but isn't that around the 2 GB mark?
comment:4 by , 8 years ago
It's more complicated than that: the userland of a single process has 2 GB of address space to work with. Note that this is not entirely equivalent to having 2 GB of memory to work with, since, e.g., the area mappings for the various shared libraries the app uses take up address space despite not taking additional memory for the code segments that are common across apps. In addition, there are other area mappings that are needed for things like system calls, which again take up address space without taking up memory.

All of this also potentially results in address space fragmentation, such that an area allocation may fail simply because there is no contiguous chunk large enough to satisfy the request. The latter may be what's happening with hoard, but at the moment it's difficult to tell for sure.
-fdebug-types-section would be worth trying on 32-bit either way, since it allows gcc to eliminate a lot of redundant debug info, which may lower the memory requirements (this option is disabled by default since not all versions of various open source dev tools know how to grok .debug_types).
comment:5 by , 8 years ago
Component: Applications/Debugger → System/libroot.so
Owner: | changed from | to
Moving to libroot since this appears to be an issue with either hoard itself, or our backend. The codepaths in question on the debugger side use std::nothrow and check the allocation result, so the expected behavior would be for the allocation to fail and return NULL.
comment:6 by , 8 years ago
Owner: | changed from | to
Status: new → assigned
Replying to KapiX:
The report with libroot_debug shows a debugger call due to failure to allocate a new memory area (i.e. the system's out of memory or the application ran out of address space). Considering that using libroot_debug with the guarded heap consumes a lot of extra memory, this isn't too surprising.
It is possible that the original crash was also due to an out of memory/address space situation since both stack traces seem to be in the path of hoard resizing. The handling of such situations should then be reviewed.
To avoid running out of address space, you could try reproducing on a 64-bit installation instead. If the used memory indication in the report is correct, available memory shouldn't be the problem.