Opened 17 years ago
Closed 16 years ago
#1970 closed bug (fixed)
VM's sAvailableMemory counter is possibly inaccurate
| Reported by: | anevilyak | Owned by: | axeld |
|---|---|---|---|
| Priority: | critical | Milestone: | R1/alpha1 |
| Component: | System/Kernel | Version: | R1/pre-alpha1 |
| Keywords: | | Cc: | bga |
| Blocked By: | | Blocking: | |
| Platform: | All | | |
Description
As exposed by the new way of doing get_system_info(), the number given by sAvailableMemory appears to be incorrect. After doing something involving heavy amounts of I/O / memory allocations, (i.e. a large svn checkout or attempting a Haiku build from within Haiku), this number steadily grows until it's significantly larger than the amount of physical RAM in the system. As an example, after attempting the build, sAvailableMemory reports back 4.8GB on a system that has only 1GB of physical RAM. Even accounting for overcommits, this seems far too high, especially given lack of swap.
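For context, the number in question is what get_system_info() ultimately exposes to user space. The following is only an illustrative sketch (not from the ticket); it assumes the documented system_info fields max_pages and used_pages and derives the "available" amount from them:

#include <OS.h>
#include <stdio.h>

int main(void)
{
	system_info info;
	if (get_system_info(&info) != B_OK) {
		fprintf(stderr, "get_system_info() failed\n");
		return 1;
	}

	/* both counters are reported in pages; available = total - used */
	uint64 total = (uint64)info.max_pages * B_PAGE_SIZE;
	uint64 used = (uint64)info.used_pages * B_PAGE_SIZE;

	printf("total: %llu bytes, used: %llu bytes, available: %llu bytes\n",
		(unsigned long long)total, (unsigned long long)used,
		(unsigned long long)(total - used));
	return 0;
}

Running such a tool before and after a heavy workload is how a counter drifting past the amount of physical RAM becomes visible.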
Attachments (2)
Change History (17)
comment:1 by , 17 years ago
comment:2 by , 17 years ago
| Cc: | added |
|---|---|
comment:3 by , 17 years ago
Looking at vm.cpp, it is obvious that sAvailableMemory can grow indefinitely if we keep calling vm_unreserve_memory() (or if we call it with a big enough value for amount, of course). As the code there is pretty straightforward, I am convinced that this is the case:
void vm_unreserve_memory(size_t amount) {
	benaphore_lock(&sAvailableMemoryLock);
	sAvailableMemory += amount;
	benaphore_unlock(&sAvailableMemoryLock);
}
We have no check whatsoever for an upper bound value here.
Another possibility is that vm_try_reserve_memory() is not being called at some point when it should be.
Anyway, I added a panic() to vm_unreserve_memory() for the case where sAvailableMemory ends up bigger than the available physical memory. See the attached screenshots of two code paths that resulted in the error. Unfortunately, they do not look very helpful.
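For reference, a minimal sketch of the kind of sanity check described above, reusing the names from the snippet quoted in this comment; sTotalMemory is a hypothetical placeholder for the amount of physical memory, not the real kernel variable:

static size_t sTotalMemory;	/* hypothetical: physical memory in bytes */

void vm_unreserve_memory(size_t amount) {
	benaphore_lock(&sAvailableMemoryLock);

	sAvailableMemory += amount;
	if (sAvailableMemory > sTotalMemory) {
		/* more memory was unreserved than was ever reserved */
		panic("vm_unreserve_memory(): available %lu exceeds total %lu",
			(unsigned long)sAvailableMemory, (unsigned long)sTotalMemory);
	}

	benaphore_unlock(&sAvailableMemoryLock);
}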
comment:4 by , 17 years ago
Is it reasonable to assume that a reservation of a given size should always be paired with a corresponding unreservation of the same size? Or is it possible that they could be aggregated?
I.e. is it safe to assume that we'll see a pattern like this:
Reserve 500, Reserve 200, Unreserve 200, Unreserve 500
or is Unreserve 700 a possibility here? I'm wondering if adding some dprintfs to track the source and size of each reservation/unreservation might help find where we're not reserving when we should, since our problem is evidently unreserving more than we reserve.
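A rough sketch of the tracing suggested here; the dprintf() placement and format are just an illustration, and __builtin_return_address(0) (a GCC builtin) merely stands in for whatever caller information turns out to be most useful:

void vm_unreserve_memory(size_t amount) {
	benaphore_lock(&sAvailableMemoryLock);

	sAvailableMemory += amount;
	/* log size and caller so unbalanced paths show up in the syslog;
	   a matching dprintf() would go into vm_try_reserve_memory() */
	dprintf("unreserve %lu from %p, %lu left\n", (unsigned long)amount,
		__builtin_return_address(0), (unsigned long)sAvailableMemory);

	benaphore_unlock(&sAvailableMemoryLock);
}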
comment:5 by , 17 years ago
Reserving and unreserving does *not* happen in pairs. The only requirement is that they are balanced in the end - but that's obviously not what happens in some situations :-)
I guess the best way would be to see where the commitment changes and where sAvailableMemory is not properly updated. It might not be simple, as the commitment cannot be handled equally for all types of caches.
comment:6 by , 17 years ago
Fyi, the problem is reproducible with the following relatively simple test app:
#include <kernel/OS.h>
#include <stdio.h>
int main(int argc, char **argv)
{
	printf("Allocating area of 1337 bytes\n");
	void *addr = NULL;
	area_id id = create_area("rgtest", &addr, B_ANY_ADDRESS, 1337,
		B_NO_LOCK, B_READ_AREA);
	if (id > 0)
		printf("Area allocated successfully, id %ld, addr: %p\n", id, addr);

	printf("Destroying area of 1337 bytes\n");
	delete_area(id);
	return 0;
}
Running this app results in the following pattern of reserves/unreserves:
KERN: try to reserve 90112 bytes, 1003679744 left
KERN: Reserve of 90112 bytes succeeded, 1003589632 left
KERN: try to reserve 8192 bytes, 1003589632 left
KERN: Reserve of 8192 bytes succeeded, 1003581440 left
KERN: try to reserve 4096 bytes, 1003581440 left
KERN: Reserve of 4096 bytes succeeded, 1003577344 left
KERN: try to reserve 65536 bytes, 1003577344 left
KERN: Reserve of 65536 bytes succeeded, 1003511808 left
KERN: try to reserve 4096 bytes, 1003511808 left
KERN: Reserve of 4096 bytes succeeded, 1003507712 left
KERN: try to reserve 577536 bytes, 1003507712 left
KERN: Reserve of 577536 bytes succeeded, 1002930176 left
KERN: try to reserve 28672 bytes, 1002930176 left
KERN: Reserve of 28672 bytes succeeded, 1002901504 left
KERN: try to reserve 4096 bytes, 1002901504 left
KERN: Reserve of 4096 bytes succeeded, 1002897408 left
KERN: try to reserve 638976 bytes, 1002897408 left
KERN: Reserve of 638976 bytes succeeded, 1002258432 left
KERN: try to reserve 32768 bytes, 1002258432 left
KERN: Reserve of 32768 bytes succeeded, 1002225664 left
KERN: try to reserve 204800 bytes, 1002225664 left
KERN: Reserve of 204800 bytes succeeded, 1002020864 left
KERN: try to reserve 339968 bytes, 1002020864 left
KERN: Reserve of 339968 bytes succeeded, 1001680896 left
KERN: Unreserved 0 bytes, 1001680896 left
KERN: try to reserve 16384 bytes, 1001680896 left
KERN: Reserve of 16384 bytes succeeded, 1001664512 left
KERN: try to reserve 4096 bytes, 1001664512 left
KERN: Reserve of 4096 bytes succeeded, 1001660416 left
KERN: try to reserve 4096 bytes, 1001660416 left
KERN: Reserve of 4096 bytes succeeded, 1001656320 left
KERN: try to reserve 4096 bytes, 1001656320 left
KERN: Reserve of 4096 bytes succeeded, 1001652224 left
KERN: Unreserved 90112 bytes, 1001742336 left
KERN: Unreserved 8192 bytes, 1001750528 left
KERN: Unreserved 0 bytes, 1001750528 left
KERN: Unreserved 4096 bytes, 1001754624 left
KERN: Unreserved 0 bytes, 1001754624 left
KERN: Unreserved 65536 bytes, 1001820160 left
KERN: Unreserved 0 bytes, 1001820160 left
KERN: Unreserved 4096 bytes, 1001824256 left
KERN: Unreserved 0 bytes, 1001824256 left
KERN: Unreserved 577536 bytes, 1002401792 left
KERN: Unreserved 28672 bytes, 1002430464 left
KERN: Unreserved 0 bytes, 1002430464 left
KERN: Unreserved 4096 bytes, 1002434560 left
KERN: Unreserved 0 bytes, 1002434560 left
KERN: Unreserved 638976 bytes, 1003073536 left
KERN: Unreserved 32768 bytes, 1003106304 left
KERN: Unreserved 0 bytes, 1003106304 left
KERN: Unreserved 204800 bytes, 1003311104 left
KERN: Unreserved 0 bytes, 1003311104 left
KERN: Unreserved 339968 bytes, 1003651072 left
KERN: Unreserved 0 bytes, 1003651072 left
KERN: Unreserved 12288 bytes, 1003663360 left
KERN: Unreserved 0 bytes, 1003663360 left
KERN: try to reserve 4096 bytes, 1003663360 left
KERN: Reserve of 4096 bytes succeeded, 1003659264 left
KERN: try to reserve 90112 bytes, 1003659264 left
KERN: Reserve of 90112 bytes succeeded, 1003569152 left
KERN: try to reserve 8192 bytes, 1003569152 left
KERN: Reserve of 8192 bytes succeeded, 1003560960 left
KERN: try to reserve 4096 bytes, 1003560960 left
KERN: Reserve of 4096 bytes succeeded, 1003556864 left
KERN: try to reserve 4096 bytes, 1003556864 left
KERN: Reserve of 4096 bytes succeeded, 1003552768 left
KERN: try to reserve 65536 bytes, 1003552768 left
KERN: Reserve of 65536 bytes succeeded, 1003487232 left
KERN: try to reserve 4096 bytes, 1003487232 left
KERN: Reserve of 4096 bytes succeeded, 1003483136 left
KERN: try to reserve 4096 bytes, 1003483136 left
KERN: Reserve of 4096 bytes succeeded, 1003479040 left
KERN: try to reserve 4096 bytes, 1003479040 left
KERN: Reserve of 4096 bytes succeeded, 1003474944 left
KERN: try to reserve 4096 bytes, 1003474944 left
KERN: Reserve of 4096 bytes succeeded, 1003470848 left
KERN: try to reserve 638976 bytes, 1003470848 left
KERN: Reserve of 638976 bytes succeeded, 1002831872 left
KERN: try to reserve 32768 bytes, 1002831872 left
KERN: Reserve of 32768 bytes succeeded, 1002799104 left
KERN: try to reserve 204800 bytes, 1002799104 left
KERN: Reserve of 204800 bytes succeeded, 1002594304 left
KERN: try to reserve 204800 bytes, 1002594304 left
KERN: Reserve of 204800 bytes succeeded, 1002389504 left
KERN: Unreserved 4096 bytes, 1002393600 left
KERN: Unreserved 618496 bytes, 1003012096 left
KERN: try to reserve 4096 bytes, 1003012096 left
KERN: Reserve of 4096 bytes succeeded, 1003008000 left
KERN: Unreserved 4096 bytes, 1003012096 left
KERN: Unreserved 20480 bytes, 1003032576 left
KERN: Unreserved 90112 bytes, 1003122688 left
KERN: Unreserved 8192 bytes, 1003130880 left
KERN: Unreserved 4096 bytes, 1003134976 left
KERN: Unreserved 65536 bytes, 1003200512 left
KERN: Unreserved 4096 bytes, 1003204608 left
KERN: Unreserved 4096 bytes, 1003208704 left
KERN: Unreserved 4096 bytes, 1003212800 left
KERN: Unreserved 638976 bytes, 1003851776 left
KERN: Unreserved 32768 bytes, 1003884544 left
KERN: Unreserved 204800 bytes, 1004089344 left
KERN: Unreserved 204800 bytes, 1004294144 left
KERN: Unreserved 16384 bytes, 1004310528 left
Notice that the final value (1004310528 left) is higher than what we started with (1003679744 left). There is also some other weirdness in there, like attempts to unreserve 0 bytes for some reason.
Hope this helps.
comment:7 by , 17 years ago
Just a small further update, this same behavior occurs without the area calls. A simple app that does nothing more than a printf and return 0 also results in a higher sAvailableMemory count after exit than before running.
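For completeness, the trivial reproducer described in this comment amounts to nothing more than:

#include <stdio.h>

int main(void)
{
	printf("hello\n");
	return 0;
}

which suggests the imbalance happens somewhere in ordinary process startup and teardown rather than in the area calls themselves.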
comment:8 by , 17 years ago
| Resolution: | → fixed |
|---|---|
| Status: | new → closed |
Thanks for investigating! The problem should be fixed with hrev24742.
comment:9 by , 17 years ago
| Priority: | normal → critical |
|---|---|
| Resolution: | fixed |
| Status: | closed → reopened |
Obviously, I haven't fixed all problems: now, too much memory is reserved, so that the system runs out of memory over time (with plenty of free space left), as it thinks it cannot commit any more memory.
comment:10 by , 17 years ago
Yes, I noticed. When unzipping the Haiku archive, the memory in use continues to grow with each file extracted (at a rate of around 1 MB/s). I think you overdid things. :)
comment:11 by , 17 years ago
Bruno brought up another good point / alternate explanation: what if we have a legitimate memory leak somewhere that was simply being masked by the previous errors of unreserving too much? In that case the current sAvailableMemory logic might well be correct.
comment:12 by , 17 years ago
I just did a simple test and it does look like it really is cache related. After booting Haiku I had 90 MB of memory in use (according to About System). Then I unzipped the Haiku tree inside itself, and after it finished, the memory usage had jumped to 255 MB. As zip is a single process and does not fork anything (it basically just reads and writes files), this memory increase was probably due to cache memory that somehow was not tracked correctly.
comment:14 by , 16 years ago
I think this one might actually have been fixed by Ingo's changes to the low memory handler ... IIRC he did double-check this and saw it balancing correctly. In any case, it's a somewhat different issue from #2574, since the latter was purely about the statistics reported by get_system_info() and, as far as I'm aware, wasn't really tied to the reservation mechanism.
comment:15 by , 16 years ago
| Resolution: | → fixed |
|---|---|
| Status: | reopened → closed |
I think the original problem this ticket was about was already fixed by Axel in hrev24742. After that, kernel memory leaks remained, which have since been fixed. Also, running low on available (i.e. non-committed) memory didn't trigger the low memory handler (that only happened when running low on free pages), but this has been fixed in the meantime as well.
I'm guessing this is tied to #1971, so that one's probably a dupe.