Opened 5 years ago

Closed 4 years ago

#15083 closed bug (not reproducible)

MemoryManager: Too many free slabs after heavy usage

Reported by: waddlesplash
Owned by: bonefish
Priority: normal
Milestone: Unscheduled
Component: System/Kernel
Version: R1/Development
Keywords:
Cc: ttcoder
Blocked By:
Blocking:
Platform: All

Description

  1. Do a lot of heavy filesystem access / use the system a lot (e.g. compiling a large project, as noted in #14451).
  2. Notice there are now a lot of areas named "slab area".
  3. Drop into KDL and see via "slab_meta_chunks -c" that these are almost entirely free.

In my case there were 162 areas of 8 MB each, for a total of some 1.3 GB of memory usage (!!). Looking over the code, I'm not really sure how this happened; it looks like only 2 free chunks are supposed to be kept in general?
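(For scale: 162 areas at 8 MB each is roughly 1.3 GB.) What I would have expected is a policy along the lines of the sketch below, i.e. keep only a small fixed number of completely free slab areas cached and give the rest back to the VM. The SlabArea struct and ReleaseExcessFreeAreas() helper are purely illustrative and not the actual MemoryManager code.

{{{
// Illustration only: keep at most kMaxFreeAreas fully free 8 MB slab areas
// cached, release the rest back to the VM. Hypothetical types and names,
// not the real MemoryManager implementation.
#include <cstddef>
#include <vector>

struct SlabArea {
	size_t usedMetaChunks;	// 0 means the whole 8 MB area is free
};

static const size_t kAreaSize = 8 * 1024 * 1024;
static const size_t kMaxFreeAreas = 2;

// Returns how many bytes would be handed back to the VM.
size_t
ReleaseExcessFreeAreas(std::vector<SlabArea>& areas)
{
	size_t freeKept = 0;
	size_t released = 0;
	for (std::vector<SlabArea>::iterator it = areas.begin();
			it != areas.end();) {
		if (it->usedMetaChunks == 0 && ++freeKept > kMaxFreeAreas) {
			// Fully free area beyond the cached quota: unmap it.
			released += kAreaSize;
			it = areas.erase(it);
		} else
			++it;
	}
	return released;
}
}}}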

Change History (5)

comment:1 by ttcoder, 5 years ago

Cc: ttcoder added

comment:2 by AGMS, 5 years ago

I wonder if that fragments kernel memory too (especially detrimental in 32-bit mode), so that a large contiguous allocation will then fail? Or will it garbage collect when it notices a lack of address space, rather than a lack of memory? Worth testing.

comment:3 by waddlesplash, 5 years ago

No, there is no address space garbage collection, so this will indeed fragment kernel memory space pretty severely.

comment:4 by waddlesplash, 5 years ago

From more investigation on this point, it seems what's going on here is allocator fragmentation: the "free" meta chunks are inside slab areas that still have 1-2 meta chunks allocated. So there are a good number of areas which are ~90% free but cannot be released, because the last ~10% is actually in use; even a single live meta chunk pins its entire 8 MB area.

Likely my reorganizations of high-usage malloc()s into object_caches will help here. But it's hard to say for sure without even more tracing.
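For reference, converting a hot malloc()/free() pair to an object_cache looks roughly like the sketch below. It assumes the private kernel slab API from headers/private/kernel/slab/Slab.h (create_object_cache(), object_cache_alloc(), object_cache_free(), delete_object_cache()); the io_request_stub type and the surrounding helpers are made up for illustration, so treat this as a sketch rather than an exact patch.

{{{
// Sketch only: replacing per-object malloc()/free() with a dedicated
// object_cache, assuming the private kernel slab API. All names here
// (io_request_stub, sRequestCache, etc.) are hypothetical.
#include <SupportDefs.h>
#include <slab/Slab.h>

struct io_request_stub {
	// ... fields of the frequently allocated object ...
	int dummy;
};

static object_cache* sRequestCache;

status_t
init_request_cache()
{
	sRequestCache = create_object_cache("io request stubs",
		sizeof(io_request_stub), 0, NULL, NULL, NULL);
	return sRequestCache != NULL ? B_OK : B_NO_MEMORY;
}

io_request_stub*
allocate_request()
{
	// was: (io_request_stub*)malloc(sizeof(io_request_stub));
	return (io_request_stub*)object_cache_alloc(sRequestCache, 0);
}

void
free_request(io_request_stub* request)
{
	// was: free(request);
	object_cache_free(sRequestCache, request, 0);
}

void
uninit_request_cache()
{
	delete_object_cache(sRequestCache);
}
}}}

The point of the conversion is that same-sized objects get packed into their own slabs instead of being scattered through the general heap, which should reduce the number of meta chunks that end up pinning mostly-free areas.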

comment:5 by waddlesplash, 4 years ago

Resolution: not reproducible
Status: assigned → closed

A lot of the problem here was resolved by (1) using more object_caches in packagefs and the kernel VFS, (2) fixing the port message memory leak, and (3) mmlr's VM changes that fixed pages leaking in cut areas with swap. Closing this as not reproducible.
