Opened 4 years ago

Last modified 4 years ago

#15828 reopened enhancement

Add automatic path remapping in packages in secondary architectures

Reported by: X512 Owned by: bonefish
Priority: normal Milestone: Unscheduled
Component: File Systems/packagefs Version: R1/Development
Keywords: Cc:
Blocked By: Blocking:
Platform: All

Description (last modified by X512)

This is hrev53967.

When 32-bit application support is added to x86_64 Haiku (https://review.haiku-os.org/c/haiku/+/427), there will be two secondary architectures: x86 and x86_gcc2. So the primary x86_gcc2 architecture on 32-bit Haiku will become a secondary architecture on 64-bit Haiku, and installing x86_gcc2 packages will become impossible.

A possible solution to this problem is to perform automatic remapping of package paths for secondary architectures. For example, if an x86_gcc2 package contains lib/libtest.so, it will be remapped to lib/x86_gcc2/libtest.so on 64-bit Haiku. After introducing this feature, the secondary architecture suffixes can be dropped in HaikuPorts, and the x86 or x86_gcc2 architecture can be used directly.

Path remapping will also simplify cross-compiling: it will become possible to install a package for any architecture on any Haiku architecture; if the architectures don't match, path remapping will be used.

Targets of path remapping are bin, lib, develop/lib, develop/headers, and possibly others. Some options for controlling remapping are required; for example, there are platform-independent and platform-dependent headers.
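For illustration, the remapping rule proposed above could be sketched roughly as follows (a minimal sketch in Python, not actual packagefs code; the prefix list and function name are assumptions for this example):

```python
# Directory prefixes that would be subject to remapping, per the description
# above (the exact set is still an open question in this ticket).
REMAPPED_PREFIXES = ("bin", "lib", "develop/lib", "develop/headers")

def remap_path(path: str, package_arch: str, primary_arch: str) -> str:
    """Insert an architecture subdirectory for secondary-architecture packages.

    On a system whose primary architecture matches the package, paths are
    left untouched; otherwise entries under the remapped prefixes gain an
    <arch> subdirectory, e.g. lib/libtest.so -> lib/x86_gcc2/libtest.so.
    """
    if package_arch == primary_arch:
        return path  # primary architecture: no remapping needed
    for prefix in REMAPPED_PREFIXES:
        if path == prefix or path.startswith(prefix + "/"):
            rest = path[len(prefix):].lstrip("/")
            return f"{prefix}/{package_arch}/{rest}" if rest else f"{prefix}/{package_arch}"
    return path  # paths outside the remapped prefixes stay as-is

print(remap_path("lib/libtest.so", "x86_gcc2", "x86_64"))
# lib/x86_gcc2/libtest.so
```

A real implementation would live on the packagefs (or runtime_loader) side and would also need the per-package options mentioned above, e.g. to exempt platform-independent headers from remapping.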

Change History (9)

comment:1 by X512, 4 years ago

Description: modified (diff)

comment:2 by X512, 4 years ago

Description: modified (diff)

comment:3 by waddlesplash, 4 years ago

Resolution: invalid
Status: new → closed

This is simply not possible as most ported software winds up hard-coding some component of the path in files or libraries themselves. Some of the time, at least the PREFIX is not hard-coded, but then the path to lib, bin, etc. is.

When we add 32-bit secondary architectures to x86_64, we will simply have to rebuild the x86_gcc2 packages as being for the secondary architecture (if at that point we even need x86_gcc2, etc.). We already did this before for the x86/x86_gcc2 hybrids, so we can do it again for x86_64/x86_gcc2 hybrids if necessary.

in reply to:  3 comment:4 by X512, 4 years ago

Replying to waddlesplash:

This is simply not possible as most ported software winds up hard-coding some component of the path in files or libraries themselves.

This is bad practice and should be avoided. As I understand it, hpkg packages are also intended for native software, not only ports. It is also possible to redirect file system requests for secondary-architecture processes, as Windows does for system32.

When we add 32-bit secondary architectures to x86_64, we will simply have to rebuild x86_gcc2 packages as being for the secondary architecture

It will result in different packages with the same contents but slightly different paths, which is weird.

Version 0, edited 4 years ago by X512 (next)

comment:5 by X512, 4 years ago

Having one 32-bit package for 32-bit systems and another for 64-bit systems is also very confusing. I have never seen this in Windows/Linux/Mac OS X application binary distributions.

comment:6 by waddlesplash, 4 years ago

Linux distros avoid this by putting *all* libs and includes in arch subdirs, which has other downsides we decided not to go with. They don't, however, usually have separate bin dirs, which leads to problems like "update-alternatives".

The secondary architecture is really only there for compatibility, not for primary usage.

in reply to:  6 comment:7 by X512, 4 years ago

Replying to waddlesplash:

The secondary architecture is really only there for compatibility, not for primary usage.

Compatibility is for primary usage, not for fun. In theory, software, once made, should be possible to run forever in its originally packaged form. This is required for dealing with different versions of software components; software also has value as part of humanity's cultural heritage, especially games. The Linux and Mac OS X approach to compatibility is wrong and should be avoided.

Last edited 4 years ago by X512 (previous) (diff)

comment:8 by waddlesplash, 4 years ago

Software should be runnable forever, yes; but there's no mandate for *how* it should be. We are not going to keep GCC2 and the BeOS ABI around forever; it's already pretty annoying to maintain them. To be quite honest, if you want to run BeOS games, running a BeOS VM is probably the best way to go about it. But we definitely do not intend to keep binary compatibility with BeOS forever.

comment:9 by pulkomandy, 4 years ago

Resolution: invalid
Status: closed → reopened

Linux distros avoid this by putting *all* libs and includes in arch subdirs, which has other downsides we decided not to go with.

Citation needed: what are the downsides? I can't think of anything obvious.

In theory, software once made should be possible to run forever in originally packaged form.

I would love to live in Theory, because in Theory, everything goes well. There are many things that get in the way of achieving this, and at some point, alternate solutions end up being better. For BeOS apps, these include:

  • Getting the source to the app and patching it,
  • Running it in a virtual machine,
  • Rewriting a replacement (or better) application,
  • Patching the binaries.

All of these would be unrealistic for Mac OS X or Windows because they have so many applications. The first one is essentially what Linux does, with the Linux distributions putting all their resources and effort into it. In our case, there are not that many BeOS applications, which means we can afford, in some cases, to be less compatible and patch the applications instead (ideally we would detect the binary and live-patch it or something).

This is not to say we should completely give up on compatibility, but at some point we have to drop the oldest things. Even Microsoft has dropped support for 16-bit (Win16) and DOS applications in modern Windows.

What we should aim for is a platform that supports and runs existing binaries long enough that distributing binaries is a reasonable way to do things. How long is long enough? It depends on the ecosystem.

BeOS is not perfect in terms of future-proofing; they made a quite good attempt, but there are limitations to their approach. We could set up ambitious plans to keep things working at all costs, for example, implementing the gcc2 ABI in Clang so we can use a modern compiler to build support libraries for BeOS apps. But is it worth the effort? Or will it end up being simpler to recreate the few affected apps?

I see BeOS compatibility as a way to train ourselves in what binary compatibility means, and to learn how to future-proof our new developments. So I think we should provide best-effort compatibility with BeOS for as long as reasonably possible, learn from the limitations of what they did, and see if we can avoid hitting the same problems.

--

Back on topic now. Should this be done in packagefs? Or in the runtime_loader? Or do both need to collaborate? Do we need a new package format to make this simpler? Changes to the buildtools?

Maybe it will end up being moved to Haiku R2, because it can't be done for BeOS apps, only for packaged Haiku R1 ones. But that is no reason to close the ticket; there is a valid use case here and things to talk about.
