Opened 3 years ago
Last modified 3 years ago
#16626 new bug
Regression: malloc/new with large buffer >2 GB fails (x64)
| Reported by: | smallstepforman | Owned by: | nobody |
|---|---|---|---|
| Priority: | high | Milestone: | Unscheduled |
| Component: | System | Version: | R1/beta2 |
| Keywords: | | Cc: | mmlr |
| Blocked By: | | Blocking: | |
| Platform: | All | | |
Description
Between hrev54154 (14th Sep 2020) and hrev54662 (21st Oct 2020), a regression was introduced that causes a C++ `new` (malloc) to fail when requesting more than 2 GB of memory (x64 architecture, 32 GB of available memory).
Stack trace:
_kern_debugger + 0x7
abort + 0x4a
llvm::report_bad_alloc_error(char const*, bool) + 0xea
operator new(std::size_t) + 0x3a
The last frame belongs to /sources/gcc-8.3.0/libstdc++-v3/libsupc++/new_op.cc.
Requested buffer size was 2,053,316,608 bytes.
This had been working for a number of years until the regression was introduced between hrev54154 (last known working revision) and hrev54662 (earliest revision with the error). Validated on three machines (MacBook Pro, Ryzen 2700, Ryzen 3700) with multiple Haiku images (disk/USB/SSD/NVMe).
Change History (2)
comment:1 by , 3 years ago
Cc: added
comment:2 by , 3 years ago
It seems to fail here: threadheap.cpp:48. I can't find any change in hrev54154..hrev54662 that affects the check above.
We really need to replace our default malloc(). Hopefully musl will get around to building the profiling tool they said they would, so I can send them logs from running Debugger and the performance issues can be fixed.
This is probably due to VM changes, I guess.