
Issues with BFS

BFS has several issues that can only be solved by breaking binary compatibility. Rather than trying to fix them all while remaining compatible, it makes more sense to start a new file system based on the BFS sources.

These are the ones I currently remember, without further investigation:

  • The space of files that were deleted while still in use is lost after a system crash (checkfs reclaims that space, so it's a good idea to run it after crashes).
  • Free space is tracked in a single global block bitmap; a tree-based structure would scale better.
  • The indices were not designed with user queries in mind (wildcard lookups can be quite expensive).
  • There is a single, fixed-size log, which theoretically prevents some operations from happening (those that no longer fit into the log).
  • Writing metadata is essentially single-threaded (as there is only one log).
  • The double indirect block range is implemented inefficiently, which makes it almost useless (it could easily cover a lot more space).
  • No support for hard links.
  • An inode occupies a whole block, but only attribute data is embedded into that block (small file data could be embedded the same way).
  • Inodes are potentially scattered over the whole disk (somewhat mitigated by the allocation policies), which is not optimal for directory reading speed.
  • The B+trees fragment over time.
  • No data integrity facilities (i.e. file data cannot be placed in the log, and there are no checksums either).
  • The indices hurt performance, as they are updated synchronously.
  • The values in B+tree nodes are not 64-bit aligned, which is a performance problem on some architectures.
  • There is no direct way to get the size of a folder without a recursive scan (see #18567).
Last modified on Nov 30, 2023, 3:58:18 AM