Opened 13 years ago

Closed 13 years ago

#8109 closed bug (fixed)

Memory usage slowly rising in About

Reported by: luroh
Owned by: mmlr
Priority: normal
Milestone: R1
Component: System/Kernel
Version: R1/Development
Keywords:
Cc:
Blocked By:
Blocking:
Platform: All

Description

hrev43219, gcc2, VMware

Not sure whether this is a bug or something else at play.

Sometime pre-BG024, the memory usage in the 'About' app used to start around 100 MB and work its way down to 75 MB or so over the course of a few minutes after booting. Lately, though, it starts at 100 MB and climbs to 110 MB in about seven minutes.

Change History (16)

comment:1 by anevilyak, 13 years ago

Resolution: invalid
Status: new → closed

Not a bug; the memory usage in About was previously reported incorrectly, since it ignored inactive pages (cf. ticket #7714). Thus, as pages were marked inactive, the reported usage would slowly drop even though the memory was still allocated.
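
For illustration, a minimal sketch of the accounting effect described above; it only models what the ticket states, not Haiku's actual code, and the structure and function names are invented:

// Illustrative model only -- not Haiku's vm_page accounting code.
#include <cstdint>

struct PageQueueCounts {
    uint64_t active = 0;
    uint64_t inactive = 0;
};

// Before the #7714 change, the number About displayed ignored inactive pages,
// so it dropped as the page daemon marked idle pages inactive.
uint64_t used_pages_before(const PageQueueCounts& c) { return c.active; }

// After the change, inactive pages are counted too, so the number no longer
// drops -- and any real leak shows up as a steady climb instead.
uint64_t used_pages_after(const PageQueueCounts& c) { return c.active + c.inactive; }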

comment:2 by axeld, 13 years ago

But that really only explains the change, not why the number now slowly increases. In other words, while this might be perfectly normal, it could also hint at a memory leak that went unnoticed because of the wrong attribution before.

Do you do anything during these 7 minutes? Does it continue to increase at that rate over time?

comment:3 by ddavid123, 13 years ago

I am having the same issue in hrev43192. About Haiku shows Haiku using about 30% more memory than before, and the usage keeps ticking upward. Previously it would slowly fall to around 60 MB given 10 minutes of no activity; now it ticks upward while idle, with only About Haiku running.


comment:4 by anevilyak, 13 years ago (in reply to comment:2)

Resolution: invalid
Status: closed → reopened

Replying to axeld:

But that really only explains the change, not why the number now slowly increases. In other words, while this might be perfectly normal, it could also hint at a memory leak that went unnoticed because of the wrong attribution before.

True. Ingo and Michael were actually trying to track down something similar during most of the code sprint week (hence all of the additional memory-related kernel debugging features that were added). In any case, while the possibility of a leak is certainly there, the fact that the numbers have jumped as a whole isn't surprising at all since the listed usage was basically lying before that change.

comment:5 by bonefish, 13 years ago

Yeah, we noticed that the used memory number slowly but steadily increases. That was already the case before you fixed the number, and it doesn't seem to have changed. The allocation tracking in the slab allocator showed us that the leak isn't memory allocated through the slab or the MemoryManager, hence Michael added allocation tracking for pages. He mentioned that he did a quick run and saw that the pages were allocated by the page fault handler, which isn't all that surprising. It's definitely worth examining this issue further. Maybe it's even related to the VM-related crashes that some people still seem to encounter quite often.
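
For illustration, a hypothetical sketch of what per-page allocation tracking can look like; the names and data structures here are invented and are not Haiku's actual debugging code:

// Hypothetical sketch of per-page allocation tracking.
#include <cstdint>
#include <map>
#include <vector>

struct PageAllocationInfo {
    std::vector<uintptr_t> stackTrace;  // return addresses at allocation time
};

// physical page number -> allocation record; dumping whatever remains in this
// table groups leaked pages by allocation site (e.g. the page fault handler).
static std::map<uint64_t, PageAllocationInfo> sTrackedPages;

void track_page_allocated(uint64_t pageNumber, std::vector<uintptr_t> trace)
{
    sTrackedPages[pageNumber] = PageAllocationInfo{std::move(trace)};
}

void track_page_freed(uint64_t pageNumber)
{
    sTrackedPages.erase(pageNumber);
}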

comment:6 by luroh, 13 years ago (in reply to comment:2)

Replying to axeld:

Do you do anything during these 7 minutes? Does it continue to increase at that rate over time?

Nope, nothing, I just start About and observe. It seems to slow down somewhat:
100 MB - 0 min
110 MB - 7 min
112 MB - 30 min
115 MB - 55 min
116 MB - 70 min
117 MB - 80 min

comment:7 by luroh, 13 years ago

...
118 MB - 90 min
119 MB - 98 min
120 MB - 105 min

So up to that point, about 1 MB every 10 minutes, or roughly 1.7 kB/s.

Then, just for the heck of it, I started continuously moving the mouse pointer around the screen in circles and, lo and behold, the memory usage rose to 125 MB in just two minutes.

comment:8 by beos_zealot, 13 years ago

Haiku uptime was about 3-4 hours; approximate list of running apps:

BeZilla Browser: ~15 tabs
WebPositive: ~5 tabs
Pe: ~10 files
Terminal: 2 instances x 2-3 tabs in each
StyledEdit: ~ 5 files
TextSearch: ~ 5 instances
~ 10 opened directories in Tracker
MediaPlayer (+ opened playlist)

As the memory usage shown in the About window drastically increased over time, I looked at Process Controller -> Memory usage just out of curiosity. I was surprised, or maybe I had just never noticed before, that app_server consumes almost 500 MB of RAM. Is that normal?

I can double-check this if it's irregular.


comment:9 by mmlr, 13 years ago

Owner: changed from axeld to mmlr
Status: reopened → in-progress

I'm going to look into this further. As mentioned, I already took a brief look, but more information is still needed.

comment:10 by mmlr, 13 years ago

Ok, I've investigated this further, and the problem is in the "fix" for #7714, i.e. in hrev43168. The inactive pages aren't necessarily unmapped, so adding them to gMappedPagesCount isn't correct, as that count already includes some of them.
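
For illustration, a minimal model of the double counting described above; only the name gMappedPagesCount comes from the ticket, everything else is invented:

// Illustrative model of the double counting -- not the actual accounting code.
#include <cstdint>

struct PageCounts {
    uint64_t mapped;          // gMappedPagesCount: pages that have mappings,
                              // regardless of which queue they sit in
    uint64_t inactive;        // pages in the inactive queue
    uint64_t inactiveMapped;  // inactive pages that are *also* still mapped
};

// What hrev43168 effectively did: inactive pages that are still mapped get
// counted twice, so the reported usage creeps upward as the page daemon moves
// mapped (e.g. library) pages to the inactive queue.
uint64_t used_pages_overcounted(const PageCounts& c)
{
    return c.mapped + c.inactive;
}

// Correcting for the overlap removes the drift.
uint64_t used_pages_corrected(const PageCounts& c)
{
    return c.mapped + c.inactive - c.inactiveMapped;
}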

comment:11 by bonefish, 13 years ago

Without looking at the code, I would say that inactive pages actually are unmapped. Besides, we saw the increasing memory use even before Rene's change.

comment:12 by mmlr, 13 years ago (in reply to comment:11)

Replying to bonefish:

Without looking at the code, I would say that inactive pages actually are unmapped. Besides, we saw the increasing memory use even before Rene's change.

At least an ASSERT(!page->IsMapped()) immediately triggered in set_page_state() for pages about to be added to the inactive queue. Note that IsMapped() is true both for pages that have mappings and for wired ones, and I haven't fully figured out what's actually happening beyond that. I've tracked down and verified that the maintenance of gMappedPagesCount is based on IsMapped() (and it is only updated when going from not mapped to mapped and vice versa). So that only left the possibility that the inactive queue does not consist entirely of unmapped pages, which was quickly confirmed by the mentioned assert. I'm still looking into it; I just wanted to post my findings in case someone wants to chime in.
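
For illustration, a simplified sketch of this kind of check; set_page_state() and IsMapped() are names from the discussion, but the bodies below are invented and not the actual kernel code:

// Simplified sketch of an invariant check on pages entering the inactive queue.
#include <cassert>

struct vm_page {
    int  state;
    bool mapped;  // stand-in for "has mappings or is wired"

    bool IsMapped() const { return mapped; }
};

enum { PAGE_STATE_INACTIVE = 1 /* other states omitted */ };

void set_page_state(vm_page* page, int state)
{
    if (state == PAGE_STATE_INACTIVE) {
        // The assumption behind hrev43168: pages entering the inactive queue
        // are unmapped. Per the comment above, this fires immediately, which
        // shows the inactive queue also holds pages that still have mappings
        // (or are wired).
        assert(!page->IsMapped());
    }
    page->state = state;
    // ... move the page to the corresponding queue ...
}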

comment:13 by mmlr, 13 years ago

Ok, adding further debug output reveals that the pages added to the inactive queue that still have mappings are (at least to a large degree) library segments (libroot, libbe, etc.). They are still mapped into the areas owned by their respective teams, and they are moved there by the page daemon's idle scan. Is that not supposed to happen? It looks fine to me in principle, it just obviously breaks the assumption that the reporting change in hrev43168 is based on.

comment:14 by bonefish, 13 years ago

Yeah, you're right. I just looked at the code, and indeed there are several cases in which the page daemon moves mapped pages to the inactive list. For pages from non-temporary caches that is desirable. For wired pages that is probably OK, too (it depends on whether the code that unwires pages handles it correctly; I haven't checked). The logic doesn't seem to be quite correct for pages whose usage count was > 0, though since kPageUsageDecline is currently 1, that doesn't do any harm either.

So, yes, things look in order, and adding the inactive page count to the mapped page count indeed isn't correct. A more correct value for the used page count would be active + inactive + wired + modified temporary, but I might be missing something.
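
For illustration, a sketch of that additive calculation under the assumption that per-queue counters are available; the struct and function names are invented:

// "active + inactive + wired + modified temporary"
#include <cstdint>

struct QueueCounts {
    uint64_t active;
    uint64_t inactive;
    uint64_t wired;
    uint64_t modifiedTemporary;  // modified pages belonging to temporary caches
};

uint64_t used_pages_additive(const QueueCounts& c)
{
    return c.active + c.inactive + c.wired + c.modifiedTemporary;
}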

comment:15 by mmlr, 13 years ago

Fixed in hrev43258. I opted to go the subtractive route, as all the counters needed for that are already in place; the result is the same anyway.
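
For illustration, a sketch of a subtractive calculation, assuming counters for total, free and clear/cached pages are already maintained (as the comment implies); this is not the actual hrev43258 change:

// Subtract everything that is not in use from the total instead of summing
// the "used" queues; arithmetically this yields the same result.
#include <cstdint>

struct GlobalCounts {
    uint64_t total;   // all physical pages
    uint64_t free;    // pages on the free queue
    uint64_t cached;  // clean pages that can be reclaimed at any time
};

uint64_t used_pages_subtractive(const GlobalCounts& c)
{
    return c.total - c.free - c.cached;
}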

comment:16 by mmlr, 13 years ago

Resolution: fixed
Status: in-progress → closed