Opened 17 years ago
Closed 17 years ago
#2148 closed bug (fixed)
BFS volume not fully written to
Reported by: | andreasf | Owned by: | axeld |
---|---|---|---|
Priority: | critical | Milestone: | R1/alpha1 |
Component: | File Systems/BFS | Version: | R1/pre-alpha1 |
Keywords: | | Cc: | |
Blocked By: | | Blocking: | |
Platform: | x86 | | |
Description
As discussed on the haiku-development list, I have a 4.0 GB BFS volume, second primary Intel partition, slave IDE device. I am unable to get more than ~2400 MB onto the volume.
Ways to reproduce:
i) `bzr branch http://bazaar-vcs.org/bzr/bzr.dev bzr.dev`

This results, 100% reproducibly (here during phase 2/5), in the KDL:

PANIC: block_cache: supposed to be clean block was changed!
Corresponding stack crawl (shortened, one frame per line):

```
[...]
panic
put_cached_block(block_cache*, cached_block*)
put_cached_block(block_cache*, ...)
<kernel> block_cache_put
<bfs> CachedBlock::Unset
Inode::_GrowStream(Transaction&, ...)
Inode::SetFileSize(Transaction&, ...)
Inode::WriteAt(Transaction&, ...)
bfs_write(fs_volume*, fs_vnode*, ...)
<bfs> file_write(file_descriptor*, ...)
common_user_io_FixPvUlb
_user_write
<kernel> pre_syscall_debug_done
[iframe]
_IO_new_file_write
_IO_do_write
_IO_new_file_xsputn
<libroot.so> fwrite
<python> PyFile_GetLine
```

(This KDL can be continued.)
Bazaar 1.3.1 is based on Python >= 2.4 and requires the `_socket` and `bz2` modules; to satisfy the latter, I compiled the Python 2.5.x (stable) SVN branch using this patch, enabling said modules in `Modules/Setup`. The patch was still considered work-in-progress, but its incompleteness should not be able to cause such a KDL. To successfully run Bazaar, the hostname needs to be set, here via `hostname test`.
ii) Copying a large source code folder (such as a successful `bzr.dev` branch from my main volume) onto the volume by dragging from one window to another results in a Tracker error message box (while the Tracker status/progress window is up) showing a large negative system error code.
iii) `dd if=/dev/zero of=/ToBe/zerofile` stops and presents the error message "Read-only file system".
After each of these reproduction methods, unexpectedly high CPU usage remains and the volume has become read-only: no further files can be written or removed. Yet the volume is neither shown as full, nor could it theoretically be full by the numbers. For i) it was slightly over 2400 MB the first time; for ii) and iii), slightly above 2380 MB.
In each case, `chkbfs` on BeOS Max is able to repair the volume; it fixes block allocation mismatches.
The system is a real P4 with HyperThreading (SMP) and 512 MB DDR RAM. The firewire bus_manager was removed to avoid issues like those seen on gcc4; it made no difference.
The partition was created with GParted on Linux and left unformatted; it is shown as "Linux native" (not "BeOS") in BeOS' DriveSetup. The partition was most likely initialized with Haiku's DriveSetup using default values. Despite the apparently wrong partition type, the BFS file system is always recognized correctly.
From `bfsinfo -s /dev/disk/ide/ata/0/slave/0/0_1` on BeOS I get this twice, in the clean state:

```
disk_super_block:
  name          = "ToBe"
  magic1        = 0x42465331 (BFS1) valid
  fs_byte_order = 0x42494745 (BIGE, little endian)
  block_size    = 2048
  block_shift   = 11
  num_blocks    = 2096482
  used_blocks   = 1174951
  inode_size    = 2048
  magic2        = 0xdd121031 (...1) valid
  blocks_per_ag = 4
  ag_shift      = 16
  num_ags       = 32
  flags         = 0x434c454e (CLEN)
  log_blocks    = (0, 129, 2048)
  log_start     = 1483
  log_end       = 1483
  magic3        = 0x15b6830e
  root_dir      = (8, 0, 1)
  indices       = (0, 2177, 1)
```
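As a quick sanity check (my own arithmetic, not part of the ticket), the super block values are consistent with the reported write ceiling: `used_blocks * block_size` lands just above 2.4 GB at the point the volume turns read-only, while the whole volume holds about 4.29 GB.

```shell
# Back-of-the-envelope check of the super block dump above.
# The three values are copied from the bfsinfo output; the arithmetic is mine.
block_size=2048
num_blocks=2096482
used_blocks=1174951
echo $(( num_blocks * block_size ))   # total volume size: 4293595136 bytes (~4.0 GiB)
echo $(( used_blocks * block_size ))  # used: 2406299648 bytes (~2406 MB), matching
                                      # the "slightly over 2400 MB" write ceiling
```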
Change History (7)
comment:1 by , 17 years ago
comment:2 by , 17 years ago (follow-up: comment:3)
Milestone: | R1 → R1/alpha1 |
---|---|
Priority: | normal → critical |
The unrelated vm_page_faults should go into a separate bug report, I guess. I've fixed the problem that you encountered in i) in hrev25125.
comment:3 by , 17 years ago (follow-up: comment:4)
comment:4 by , 17 years ago
The full `chkbfs` output afterwards is this:

```
bfs: /dev/disk/ide/ata/0/slave/0/0_1 is read-only!
Files processed: 166751
BFS has 21286 blocks allocated that should not be.
Block allocation mismatches detected. Fixing.
File system check completed.
```
Maybe this helps with simulating.
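For scale (my arithmetic, not from the ticket): the 21286 wrongly allocated blocks chkbfs reports, at the volume's 2048-byte block size, amount to roughly 43.6 MB of falsely occupied space.

```shell
# Space covered by the blocks chkbfs reports as wrongly allocated.
# Block size (2048) is taken from the super block dump in the description.
echo $(( 21286 * 2048 ))   # 43593728 bytes, ~43.6 MB
```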
comment:5 by , 17 years ago
At least the "dd" problem should be fixed with hrev25249. Can you still reproduce it, or any of the other issues?
comment:7 by , 17 years ago
Resolution: | → fixed |
---|---|
Status: | new → closed |
According to Andreas, the problem is fixed since hrev25249.
Probably unrelated: when copying large folders (~12000 files) from this 4.0 GB volume to a new Haiku-DriveSetup-created 12.0 GB volume, I quite reproducibly get vm_page_faults somewhere beyond 8000 files (gcc2). No problems in BeOS, though.