Opened 17 years ago

Closed 17 years ago

Last modified 17 years ago

#1100 closed enhancement (invalid)

configure should check for prerequisites

Reported by: ekdahl
Owned by: bonefish
Priority: low
Milestone: R1
Component: Build System
Version: R1/pre-alpha1
Keywords:
Cc:
Blocked By:
Blocking:
Platform: All

Description

When building on platforms other than BeOS-compatible ones, there should be checks for bison, flex, and makeinfo (part of texinfo; required to build the latest gcc4). There might be more, but these are the ones I remember. Many people ask these kinds of questions on IRC.
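As an illustration of the kind of check being asked for (a minimal sketch in POSIX shell, not Haiku's actual configure code; the tool list is just the one from the description above):

    # Fail early if a required host tool is missing from PATH
    # (makeinfo is part of the texinfo package).
    for tool in bison flex makeinfo; do
        if ! command -v "$tool" > /dev/null 2>&1; then
            echo "configure: error: required tool '$tool' not found" >&2
            exit 1
        fi
    done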

Change History (7)

comment:1 by jackburton, 17 years ago

Indeed, I ran into the makeinfo problem myself on Ubuntu...

comment:2 by jonas.kirilla, 17 years ago

Does the build system already check if the filesystem is reasonably sized, and whether or not there is a risk of running out of inodes?

in reply to: comment:2; comment:3 by bonefish, 17 years ago

Priority: normal → low
Resolution: invalid
Status: new → closed

Replying to jonas.kirilla:

Does the build system already check if the filesystem is reasonably sized, and whether or not there is a risk of running out of inodes?

I suppose this is meant as a joke. If not, I'm really puzzled. :-)

Regarding the missing tools: I think this is mostly a problem for less experienced users. Stefano, I'm sure you could interpret the resulting "/bin/sh: makeinfo: file or directory not found" (or similar) error message easily enough and fix the problem by installing the missing tool. Also, the required tools are all pretty standard development tools, nothing fancy. If you build packages from source from time to time, you'll definitely have encountered all of them already.

Anyway, I've added the list of required tools (I hope I've listed all of them -- if not, please tell me) to the ReadMe.cross-compile. I think this is just as good as adding checks to the configure script. I'm closing this ticket.

in reply to: comment:3; comment:4 by ekdahl, 17 years ago

Anyway, I've added the list of required tools (I hope I've listed all of them -- if not, please tell me) to the ReadMe.cross-compile. I think this is just as good as adding checks to the configure script. I'm closing this ticket.

That's definitely a big improvement, and hopefully "kills" most questions about that. Thank you!

comment:5 by jonas.kirilla, 17 years ago

No joke. Is it really that ridiculous? :]

I've seen people report that they've run out of inodes when building large projects on UFS (ext2 is similar, IIRC) and the Haiku build isn't small anymore.

I don't know if the configure script is a good place for it, but it might still be a good idea in theory to have the build try to estimate whether the host system has a fair chance of finishing the task at hand. Why spend hours of CPU and disk time, and then fail, if it was easily knowable beforehand? If it's not easily known beforehand, then I suppose this feature request is stillborn, but what if a disk is obviously too small or already low on inodes, and this can be measured? (I'm guessing it can be.)

Anyway, perhaps it's not worth the effort.

comment:6 by mmu_man, 17 years ago

Maybe "df -i", for Linux at least? http://www.hmug.org/man/1/df.php
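For what it's worth, a minimal sketch of how such a check could use that output (assuming GNU df on Linux, where the fourth column of "df -i" is the free inode count; the threshold below is purely illustrative):

    # Warn if the filesystem holding the current directory looks low on
    # inodes; the threshold is illustrative, not a measured requirement.
    min_inodes=100000
    free_inodes=$(df -i . | awk 'NR == 2 { print $4 }')
    if [ "$free_inodes" -lt "$min_inodes" ] 2> /dev/null; then
        echo "warning: only $free_inodes free inodes on this filesystem" >&2
    fi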

in reply to: comment:5; comment:7 by bonefish, 17 years ago

Replying to jonas.kirilla:

No joke. Is it really that ridiculous? :]

I've seen people report that they've run out of inodes when building large projects on UFS (ext2 is similar, IIRC) and the Haiku build isn't small anymore.

I actually didn't think there were any current FSs (no, this definitely doesn't include UFS) that still have an inode limit (other than the number of free blocks). ext3 does have a specifiable limit, the default being one file per block (for 4 KB blocks at least), AFAIK. ReiserFS doesn't even have inodes.

I don't know if the configure script is a good place for it, but it might still be a good idea in theory to have the build try to estimate whether the host system has a fair chance of finishing the task at hand. Why spend hours of CPU and disk time, and then fail, if it was easily knowable beforehand?

Er, building a complete Haiku image (not including the buildtools) takes a little less than 15 minutes on my desktop machine, which is by no means current (P4, 3.2 GHz). Besides, the time is not lost: you can simply free some disk space and jam again, continuing from the point where you stopped.

If it's not easily known beforehand, then I suppose this feature request is stillborn, but what if a disk is obviously too small or already low on inodes, and this can be measured? (I'm guessing it can be.)

Anyway, perhaps it's not worth the effort.

I don't think it is. Adding the checks is one thing, but since Haiku keeps growing, one would also have to adjust the numbers from time to time.

Anyway, it is way more likely to run out of inodes or disk space while checking out the haiku and buildtools sources than while building. A bit of statistics, after running "jam -q haiku-vmware-image":

total number of files:
  generated: 15025
  haiku sources (without generated): 33639
  buildtools sources: 27941
  ratio sources/generated: 4.1

total disk space:
  generated: 433029 KB
  haiku sources (without generated): 867145 KB
  buildtools sources: 1046479 KB
  ratio sources/generated: 4.77

BTW, that also shows that removing the buildtools sources after building the tools gives you more than enough room for building the image.
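Numbers of this kind can be gathered with standard tools; a rough sketch, assuming hypothetical directory names with the generated, haiku, and buildtools trees sitting side by side:

    # Count files and measure disk usage per tree (paths are assumptions).
    for dir in generated haiku buildtools; do
        printf '%s: %s files, ' "$dir" "$(find "$dir" -type f | wc -l)"
        du -sk "$dir" | awk '{ print $1 " KB" }'
    done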
