Opened 10 years ago

Last modified 7 years ago

#10637 assigned bug

memory issue with mounted image files

Reported by: jessicah
Owned by: nobody
Priority: normal
Milestone: R1
Component: System/Kernel
Version: R1/Development
Keywords:
Cc:
Blocked By:
Blocking:
Platform: All

Description

I have an 8GB BFS-formatted image file mounted; my system partition has about 3GB free, and the machine has 8GB of RAM. Virtual Memory is disabled, but the problem occurred with it enabled as well.
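For reference, the setup was roughly as follows (the image path and volume name are placeholders, and the exact mkfs/mountvolume invocations are approximate, from memory):

dd if=/dev/zero of=/Cabinet/image.bfs bs=1M count=8192    # create an 8GB image file
mkfs -t bfs /Cabinet/image.bfs Image                      # format it as BFS (invocation approximate)
mountvolume /Cabinet/image.bfs                            # mount the image file as a volume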

Writes to the mounted image file seem to cause problems in the file caching layer. Even though memory usage in ProcessController never appears to exceed 500MB, large writes to the mounted image cause the system to think there is no memory left.

Also, the kernel's "low resource handler" thread hammers a single core at near 100% utilisation.

The system gets into a state where even reads from the mounted image file appear to be corrupted.

E.g.

/Cabinet/webkit> git checkout HEAD --force
error: failed to read object 3e482c427f57a47f262816900de4ca655b322792 at offset 98195304 from .git/objects/pack/pack-1d8534f61de156ee015c1949b78755305f4e2ebb.pack
fatal: packed object 3e482c427f57a47f262816900de4ca655b322792 (stored in .git/objects/pack/pack-1d8534f61de156ee015c1949b78755305f4e2ebb.pack) is corrupt
/Cabinet/webkit> git pull
bash: /bin/git: Out of memory

After a reboot, git no longer complains about corrupted files the first time through (of course, a few file system operations later, what once worked ends up being reported as corrupted again).

Attachments (1)

syslog (164.7 KB) - added by jessicah 10 years ago.


Change History (4)

by jessicah, 10 years ago

Attachment: syslog added

comment:1 by pulkomandy, 10 years ago

It's possible to hit this with real hard drives too. See my comments in #5777: rsyncing from a (fast) SSD to a (slower) USB HDD has similar results.
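Something along these lines is enough to trigger it (the source and destination paths are placeholders for a fast SSD and a slow USB HDD):

rsync -a /FastSSD/projects/ /SlowUSB/projects/    # bulk copy from a fast disk to a much slower one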

comment:2 by jessicah, 10 years ago

Possibly a problem with the file cache not attempting to commit writes early enough? It can be reproduced fairly easily by running dd if=/dev/urandom of=file.of.junk bs=1M count=8192 and watching the cache climb steadily with no apparent effort to empty it. Changing the dd line to add oflag=direct appears to do the correct thing and bypass the cache altogether, and in this particular instance things hum along quite nicely.
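Concretely, the two variants (file.of.junk is just a scratch file):

# Writes go through the file cache; it climbs steadily until the system
# starts reporting out-of-memory conditions.
dd if=/dev/urandom of=file.of.junk bs=1M count=8192

# With oflag=direct (O_DIRECT) the cache is bypassed and everything behaves.
dd if=/dev/urandom of=file.of.junk bs=1M count=8192 oflag=direct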

It's not a write speed issue either: without the flag, it simply doesn't do any writing. The speeds I get using dd with the direct flag are much the same as without it, when dd is simply writing into the cache.

Given the behaviour with dd filling up the cache, I suspect my issues with the mounted BFS image file have the same cause.

comment:3 by axeld, 7 years ago

Owner: changed from axeld to nobody
Status: new → assigned