Opened 16 years ago

Closed 15 years ago

#3808 closed bug (fixed)

Copying or creating large files causes General System Error

Reported by: haiqu
Owned by: nobody
Priority: normal
Milestone: R1
Component: File Systems/BFS
Version: R1/pre-alpha1
Keywords:
Cc:
Blocked By:
Blocking:
Platform: x86

Description

Since rebuilding about 12 hours ago I've noticed lockups when creating haiku.image, when copying large files from another drive, and when trying to create a CD from a 700MB image file. When copying from another drive I get a General System Error and the drive becomes read-only. Running checkfs /Drivename doesn't fix it, but thankfully I can boot into BeOS and run chkbfs, which does.

Sorry I can't tell you the exact revision of the build, but I'm back on BeOS fixing the drive yet again. :(
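[For reference, a minimal sketch of the recovery steps described above; the volume name is a placeholder, and the exact checkfs/chkbfs option syntax should be verified against each system's tools:]

    # Under Haiku: check (and normally repair) the volume -- but a volume
    # that BFS has forced read-only cannot be repaired this way
    checkfs /MyVolume

    # Under BeOS: chkbfs can repair the same BFS volume
    chkbfs /MyVolume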

Change History (14)

comment:1 by marcusoverhagen, 16 years ago

Owner: changed from marcusoverhagen to nobody

comment:2 by mmlr, 16 years ago

This indicates a BFS problem. Is it possible that you were almost out of free space on that volume? Due to the structure BFS uses, it's possible that there is indeed free space left (even some tens or hundreds of MBs, I think) that nevertheless cannot be allocated for the file you try to create. The syslog will probably contain more info on why BFS decided it was safer to go into read-only mode (if this happens to your boot volume, you have to enter KDL and use the syslog command from there, as the syslog on the boot volume can't be written to anymore). If you could find that info and post it here, that'd be useful.
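[A rough sketch of retrieving that output on a typical Haiku setup; the key combination and KDL command names here are recalled from memory and may differ by build:]

    Alt + SysReq + D        # drop into the kernel debugger (KDL) manually
    kdebug> syslog          # dump the in-memory syslog to the debug console
    kdebug> continue        # leave KDL and resume the system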

comment:3 by umccullough, 16 years ago

Component: Drivers/Disk → File Systems/BFS

Per mmlr's comment, probably best to change the component (feel free to change back if this is wrong!)

comment:4 by axeld, 16 years ago

Without any extra information, I certainly can't do anything about this. The debug output (via serial debug output or the syslog, as mmlr said) is really the minimum information needed to at least get an idea of what happened.

Generally, BFS switches to read-only mode if it encounters a fatal error on disk. "checkfs" therefore can't fix this, as writing to that disk is no longer allowed.

comment:5 by mmlr, 16 years ago

My personal guess (since I happened to encounter the same thing yesterday) is that the error is "invalid block run 0, 0, 0". My scenario was a wrong argument to dd: I mixed up seek and skip, resulting in a 4GB file being created instead of reading at a 4GB offset in the source. Since my disk only had a few hundred megs of free space I knew it wouldn't be able to handle it, but it didn't allow me to kill it in any way, so I just left it running. After I finally aborted it, the volume was read-only, with the above message as the last output from BFS. Maybe there's a bug hiding in the out-of-space case?
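[To illustrate the mixup (filenames and sizes here are made up for the example): skip offsets into the input file, while seek offsets into the output file, so swapping them makes dd create a huge, mostly empty output instead of reading from an offset:]

    # Intended: read 100 MB starting 4 GB into the source
    dd if=source.img of=out.img bs=1M skip=4096 count=100

    # Mixed up: writes 100 MB starting at a 4 GB offset in out.img,
    # growing out.img to ~4.1 GB even on a nearly full disk
    dd if=source.img of=out.img bs=1M seek=4096 count=100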

comment:6 by haiqu, 16 years ago

I'm not running into an out-of-space problem. Copying a 700MB image file onto a drive with over 1.5GB fails at 29MB every time, and trying to create an image under Haiku locks the machine with the CPU running at 100%. Even if there were a fault in the original file, checkfs should have been able to fix the problem anyhow. So these are really two errors.

Did we change to the new ata bus manager recently? I might switch back and see what happens.

in reply to:  6 comment:7 by umccullough, 16 years ago

Replying to haiqu:

> Did we change to the new ata bus manager recently? I might switch back and see what happens.

Nope, not yet. To switch to it, you have to replace the two instances of "ide" in your build/jam/HaikuImage with "ata". At least here, I've been using the new ata bus_manager on several machines and it's working wonderfully :)
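[A minimal sketch of that edit, assuming the token appears literally in the file; verify the two matches before substituting, since the exact spelling inside build/jam/HaikuImage may differ:]

    # locate the two occurrences first
    grep -n ide build/jam/HaikuImage

    # then swap them; the word-boundary match is an assumption about the file
    sed -i 's/\bide\b/ata/g' build/jam/HaikuImage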

comment:8 by haiqu, 16 years ago

Axel suggested I try it in haiku-development a few weeks ago, but my machine wouldn't boot. :(

OK, so that possibility is eliminated as the source of the problem.

comment:9 by axeld, 16 years ago

Ah well, if you can reproduce BFS bugs that easily, then please enable the tracing mechanisms (in build/config_headers/tracing_config.h, set BFS_TRACING to 1, BLOCK_CACHE_BLOCK_TRACING to 2, and BLOCK_CACHE_TRANSACTION_TRACING to 1), don't forget to enlarge the trace log (MAX_TRACE_SIZE to 200 MB or more), and then reproduce the problem. Once done, please send me the trace output :-)
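[A sketch of what the relevant lines in build/config_headers/tracing_config.h would look like after those changes; the macro names come from the comment above, but the surrounding layout and the assumption that MAX_TRACE_SIZE is given in bytes should be checked against the actual header:]

    #define BFS_TRACING                      1
    #define BLOCK_CACHE_BLOCK_TRACING        2
    #define BLOCK_CACHE_TRANSACTION_TRACING  1
    #define MAX_TRACE_SIZE  (200 * 1024 * 1024)  /* assuming bytes; ~200 MB */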

I've been trying to reproduce these bugs for weeks, but without much success yet :-/

comment:10 by haiqu, 16 years ago

I downgraded my sources to hrev30300 and rebuilt last night, and the fault was still there. It wasn't there in hrev30233. That should give you an idea of where to look ...

It can be reproduced easily. Just try to burn a CD.
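[For anyone narrowing this down further, a hypothetical bisection sketch between the two revisions named above; Haiku used Subversion at the time, and the midpoint revision and build command here are illustrative:]

    svn update -r 30266    # midpoint between 30233 (good) and 30300 (bad)
    jam -q haiku-image     # rebuild, then retest the copy/burn scenario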

comment:11 by haiqu, 16 years ago

Can now copy large files onto Haiku from other drives, but still cannot create a CD image using:

jam -q haiku-image

The build freezes the machine at the beginning of the write, with CPU at maximum smoke. Only a hard reboot will cure it.

My copy of mkisofs creates boot images just fine, and a build of mkisofs from the same sources works in BeOS 5.1d0 to create a full image. This may not be a file system problem; I suspect memory management, which underwent a lot of changes just before this fault arose.

comment:12 by haiqu, 16 years ago

See also #3932

comment:13 by haiqu, 15 years ago

Haven't had a problem with this since changing to a faster computer. My previous development box was a Duron 800, and evidently Haiku runs out of puff on that. On the Athlon 64 3000+ it just works.

comment:14 by axeld, 15 years ago

Resolution: fixed
Status: new → closed

This is probably related to the problems with double indirect blocks that I fixed in hrev31767. In any case, there isn't enough information to assume anything else.
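[For context on what "double indirect blocks" refers to: BFS inodes address file data through runs of blocks at three levels, and very large files, like the 700MB images in this ticket, reach the double-indirect level. A rough C sketch of the on-disk layout, with field names recalled from the published BFS design and standard integer types substituted for Haiku's own, so treat the details as approximate:]

    #include <stdint.h>

    /* One contiguous run of blocks within an allocation group. */
    typedef struct block_run {
        int32_t  allocation_group;
        uint16_t start;
        uint16_t length;
    } block_run;

    #define NUM_DIRECT_BLOCKS 12

    /* How an inode maps file contents: small files fit in the direct runs,
       bigger ones spill into the indirect run, and very large files are
       mapped through the double-indirect run -- the area hrev31767 touched. */
    typedef struct data_stream {
        block_run direct[NUM_DIRECT_BLOCKS];
        int64_t   max_direct_range;
        block_run indirect;
        int64_t   max_indirect_range;
        block_run double_indirect;
        int64_t   max_double_indirect_range;
        int64_t   size;
    } data_stream;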
