Opened 16 years ago

Closed 6 years ago

#3117 closed bug (fixed)

svn co while emptying trash makes trash in checked out data

Reported by: Adek336
Owned by: nobody
Priority: normal
Milestone: R1
Component: Drivers/Network
Version: R1/pre-alpha1
Keywords:
Cc:
Blocked By:
Blocking:
Platform: All

Description

svn co svn://svn.berlios.de/haiku/haiku/trunk /haiku-dane/haiku

while another copy of the Haiku trunk was being deleted from /haiku-dane (by emptying the trash)

As a result, garbage ends up in the newly checked out data; for example, "svn stat" complains about garbage in the svn control files.

I have been able to reproduce it with a via_rhine ethernet adapter but not yet with a broadcom one; it may be linked to long mbuf chains. See also #2840.

Change History (7)

comment:1 by Adek336, 16 years ago

So data that were already written to disk don't get corrupted; only data that are being written to disk at the time get corrupted.

On a BFS partition, take say 30 files of 20 MiB each and copy them; wait a few seconds after Tracker has finished copying, then press F12 and type "reb<cr>". After the reboot the just-copied files are visible, but they are corrupt. Since BFS has journaling, I would have expected newly copied files after a crash to be either present and intact, or not to exist at all.
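
For reference, a minimal shell sketch of that reproduction (my sketch; the paths and the use of dd/sha256sum are assumptions, not from the ticket, and presume a Haiku shell with coreutils installed):

    # create 30 files of 20 MiB each and record their checksums
    mkdir -p /src /dst
    for i in $(seq 1 30); do
        dd if=/dev/urandom of=/src/file$i bs=1M count=20 2>/dev/null
    done
    (cd /src && sha256sum file* > /checksums.orig)

    # copy them, wait a few seconds after the copy finishes,
    # then press F12 and type "reb<cr>" in the kernel debugger
    cp /src/file* /dst/

    # after the reboot: any corrupted copy fails the check
    (cd /dst && sha256sum -c /checksums.orig)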

comment:2 by anevilyak, 16 years ago (in reply to comment:1)

Replying to Adek336:

On a BFS partition, take say 30 files of 20 MiB each and copy them; wait a few seconds after Tracker has finished copying, then press F12 and type "reb<cr>". After the reboot the just-copied files are visible, but they are corrupt. Since BFS has journaling, I would have expected newly copied files after a crash to be either present and intact, or not to exist at all.

You seem to be under a slight misconception here: on the vast majority of filesystems, journalling guarantees nothing whatsoever with respect to the file data. All that is journalled is metadata operations; in other words, the journal is used to guarantee that the FS itself is internally consistent. If a file is created, destroyed or resized, it is guaranteed that all the underlying metadata operations (volume bitmap updates, B+tree modifications, etc.) happened completely or not at all. The actual file block data is not journalled or guaranteed in any way, shape or form. The only filesystems I'm aware of that do protect file data are copy-on-write designs such as ZFS.
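
To illustrate the practical consequence (my sketch, not from the ticket): an application or user that needs the file data itself to survive a crash has to flush it explicitly, for example with sync(1), because the journal only protects the metadata:

    cp bigfile /target/bigfile
    sync    # flush dirty data blocks from the cache to disk
    # without the sync, a crash at this point can leave /target/bigfile
    # with a consistent size and directory entry (metadata is journalled)
    # while its contents are garbage (data blocks never reached the disk)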

comment:3 by axeld, 16 years ago

Indeed, BFS does not log file data, only metadata. It even has a "log file" flag (usually used for symlinks), but that no longer works in our BFS, since it has been ported to Haiku's file system API.

comment:4 by axeld, 8 years ago

Owner: changed from axeld to nobody
Status: new → assigned

comment:5 by waddlesplash, 6 years ago

Anyone seen this in the last 10 years?

comment:6 by diver, 6 years ago

I think we would have run into this if it was still the case.

comment:7 by diver, 6 years ago

Resolution: fixed
Status: assigned → closed