Opened 5 years ago

Closed 4 years ago

#15370 closed enhancement (fixed)

[HaikuDepot] Load and Unload Icons On Demand

Reported by: apl-haiku Owned by: stippi
Priority: normal Milestone: R1/beta3
Component: Applications/HaikuDepot Version: R1/Development
Keywords: Cc:
Blocked By: Blocking:
Platform: All

Description

A significant part of HaikuDepot's startup time is spent loading all of the icons for the packages present in the system's repositories. The runtime memory overhead of these icons appears to be circa 8MB.

The current approach was probably sound when there were few packages, but now that the number of packages has grown, the cost of loading all of the icons is no longer trivial. A better approach would be to load the icons 'on demand' via some sort of LRU cache.

Situations

There are two situations where packages' icons are required.

The first is in the main list of packages (either "all" or "featured"). Here the logic is controlled by the class PackageListView, which subclasses BColumnListView (a private class).

The second is in the lower portion of the window where packages' details are displayed. The icon there is shown next to the name in the top left corner. The class involved here is TitleView (see PackageInfoView.cpp).

Solution

An LRU cache would be used that holds a fixed number of icons; say 100 or so. Possibly the class LocalIconStore could be modified to provide this additional functionality.

If the user scrolls quickly through the list of packages, no icons should be loaded at that speed. If scrolling slows so that an icon remains visible for more than a second or so, the icon should be loaded and then displayed once loading completes.

For the package details view, the icon load should be initiated as soon as the package details are displayed; once loaded, the icon should be shown.
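The on-demand LRU behaviour described above can be sketched as follows. This is a minimal Python illustration, not HaikuDepot code; the `load_icon` callback and the capacity of 100 are assumptions taken from the proposal.

```python
from collections import OrderedDict

class IconCache:
    """A small LRU cache: holds at most `capacity` icons in memory,
    evicting the least recently used entry when the cache is full."""

    def __init__(self, load_icon, capacity=100):
        self._load_icon = load_icon   # callback: package name -> icon data
        self._capacity = capacity
        self._icons = OrderedDict()   # package name -> icon data, in LRU order

    def get(self, package_name):
        if package_name in self._icons:
            # Cache hit: mark this entry as most recently used.
            self._icons.move_to_end(package_name)
            return self._icons[package_name]
        # Cache miss: load on demand, then evict the oldest entry if needed.
        icon = self._load_icon(package_name)
        self._icons[package_name] = icon
        if len(self._icons) > self._capacity:
            self._icons.popitem(last=False)
        return icon
```

The scroll-speed heuristic would sit outside the cache: the list view only calls `get()` for a row once that row has stayed visible long enough.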

Change History (8)

comment:1 by diver, 5 years ago

Component: General → Applications/HaikuDepot
Owner: changed from nobody to stippi
Type: bug → enhancement

comment:2 by pulkomandy, 5 years ago

In addition to that, the icons are currently each extracted to a single file, which wastes a lot of disk space (at least an inode and a data block must be allocated for each file, and HVIF icons are very small, so they will typically not fill a full disk block).

I would consider a different storage format for the cache. Maybe an archived BMessage? We could store the data more efficiently as key/value pairs there (the key being the package name, the value the HVIF icon data).
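The key/value idea can be sketched generically. The following is a Python stand-in for an archived key/value store, not the BMessage API; the length-prefixed layout is invented purely for illustration. The point is that many tiny icons packed into one blob avoid the per-file inode and block overhead described above.

```python
import struct

def pack_icons(icons):
    """Pack a {package_name: icon_bytes} mapping into a single blob.
    Each entry is stored as a length-prefixed key followed by a
    length-prefixed value."""
    out = bytearray()
    for name, data in icons.items():
        key = name.encode("utf-8")
        out += struct.pack(">II", len(key), len(data))  # two big-endian u32 lengths
        out += key + data
    return bytes(out)

def unpack_icons(blob):
    """Inverse of pack_icons: rebuild the mapping from the blob."""
    icons = {}
    pos = 0
    while pos < len(blob):
        key_len, data_len = struct.unpack_from(">II", blob, pos)
        pos += 8
        name = blob[pos:pos + key_len].decode("utf-8")
        pos += key_len
        icons[name] = blob[pos:pos + data_len]
        pos += data_len
    return icons
```

As the next comment points out, the drawback of any such single-archive approach is that the whole archive must be deserialized into memory to look anything up.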

comment:3 by apl-haiku, 5 years ago

This is a good point, but storing them in a BMessage would mean that the serialized BMessage would need to be loaded into memory in its entirety, which would still consume a lot of memory.

Another problem with a single BMessage as the storage medium is that the gzip-compressed tar-ball that comes down from the server contains all icons in the HaikuDepotServer (HDS) system, because icons are applicable cross-repository in HDS. The Haiku desktop system, however, may be configured with only a subset of the repositories available in HDS. The in-memory BMessage could thus be holding many icon images that will never apply to any package in the system's configured repositories; again using memory unnecessarily.

A better approach I had thought of at the time would be to decompress the downloaded tar-ball but leave it as a tar-ball on disk. Next, create a mapping from package name to the byte offset where the package's icons are located in the tar-ball. By keeping this index, the huge number of individual files could be avoided. This would still work well with the proposal of on-demand loading.

Only a meta-data JSON file would need to be extracted to a flat file in order to handle the data-freshness of the downloaded icons tar-ball; HaikuDepot checks the JSON data to see if the server has newer icons to download.

comment:4 by pulkomandy, 5 years ago

Yes, that would work too. Another option is storing all icons as filesystem attributes of a single file, but I'm not sure how efficient that would be. I'm wondering if using a different compression format would help for direct access to specific files. Maybe zip is better structured for that? Or hpkg, but that would be abusing the format for something it's not designed to do.

comment:5 by apl-haiku, 5 years ago

I don't think compression is very important in this case: the PNGs are already compressed, the HVIFs are really small anyway, and the tar-ball headers are about as small as they could be. I think tar is a great format, and there are already lots of tools for working with it.

comment:6 by pulkomandy, 5 years ago

Yes, what I had in mind is not compression, but a format with a built-in index allowing direct access to the files' data (which I think tar doesn't provide, so we'd have to generate it ourselves). Compression would just make things more difficult here.

But yes, tar with separate index would do just as well and requires less changes.

comment:7 by diver, 4 years ago

Is it implemented in hrev54712?

comment:8 by apl-haiku, 4 years ago

Milestone: Unscheduled → R1/beta3
Resolution: fixed
Status: new → closed

Thanks for the reminder @diver.
