#3547 closed bug (fixed)
cc fails when compiling bfs_disk_system.cpp
Reported by: | Phoenix137 | Owned by: | bonefish
---|---|---|---
Priority: | normal | Milestone: | R1
Component: | Build System | Version: | R1/pre-alpha1
Keywords: | | Cc: | jason.wrinkle@…, fredrik.holmqvist@…
Blocked By: | | Blocking: |
Platform: | x86-64 | |
Description
Processor: AMD64 2.2 GHz x2
Host System: linux32
64-bit Ubuntu 8.10 (synaptic updated ~3-9-9)
On file: bfs_disk_system.cpp
Error type: casting
Details:
bfs.h:375:
error: cast from ‘const small_data*’ to ‘fssh_addr_t’ loses precision
error: cast from ‘const bfs_inode*’ to ‘fssh_addr_t’ loses precision
Terminal output (note: the same error is seen when using jam; the cc line below was lifted out of the jam output messages to help isolate the bug. All jamming and compiling of a fresh svn checkout was done in the linux32 subsystem):
$ cc -c "src/add-ons/kernel/file_systems/bfs/bfs_disk_system.cpp" -O1 -Wall -Wno-trigraphs -Wno-ctor-dtor-privacy -Woverloaded-virtual -Wpointer-arith -Wcast-align -Wsign-compare -Wno-multichar -DBFS_SHELL -Wall -Wno-multichar -fno-rtti -D_ZETA_USING_DEPRECATED_API_=1 -D_ZETA_TS_FIND_DIR_=1 -DARCH_x86 -D_NO_INLINE_ASM -DINTEL -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -DHAIKU_HOST_PLATFORM_LINUX -iquote build/user_config_headers -iquote build/config_headers -iquote src/tools/bfs_shell -iquote generated/objects/common/tools/bfs_shell -iquote generated/objects/linux/x86/common/tools/bfs_shell -iquote generated/objects/haiku/x86/common/tools/bfs_shell -iquote src/add-ons/kernel/file_systems/bfs -I headers/private/fs_shell -I headers/build/host/linux -o "generated/objects/linux/x86/release/tools/bfs_shell/bfs_disk_system.o" ;
In file included from src/add-ons/kernel/file_systems/bfs/bfs_disk_system.cpp:8:
src/add-ons/kernel/file_systems/bfs/bfs.h: In member function ‘bool small_data::IsLast(const bfs_inode*) const’:
src/add-ons/kernel/file_systems/bfs/bfs.h:375: error: cast from ‘const small_data*’ to ‘fssh_addr_t’ loses precision
src/add-ons/kernel/file_systems/bfs/bfs.h:375: error: cast from ‘const bfs_inode*’ to ‘fssh_addr_t’ loses precision
$
Source Notes:
file: ./headers/private/fs_shell/fssh_types.h:
#ifdef HAIKU_HOST_PLATFORM_64_BIT
typedef uint64_t fssh_addr_t;
#else
typedef uint32_t fssh_addr_t;
#endif
file: ./headers/build/BeOSBuildCompatibility.h (the same lines also appear in its .svn/text-base pristine copy):
#ifdef x86_64
#define HAIKU_HOST_PLATFORM_64_BIT
#endif
file: src/add-ons/kernel/file_systems/bfs/bfs.h
...
struct bfs_inode;

struct small_data {
    uint32      type;
    uint16      name_size;
    uint16      data_size;
    char        name[0];    // name_size long, followed by data

    uint32 Type() const { return BFS_ENDIAN_TO_HOST_INT32(type); }
    uint16 NameSize() const { return BFS_ENDIAN_TO_HOST_INT16(name_size); }
    uint16 DataSize() const { return BFS_ENDIAN_TO_HOST_INT16(data_size); }

    inline char *Name() const;
    inline uint8 *Data() const;
    inline uint32 Size() const;
    inline small_data *Next() const;
    inline bool IsLast(const bfs_inode *inode) const;
} _PACKED;

// the file name is part of the small_data structure
#define FILE_NAME_TYPE          'CSTR'
#define FILE_NAME_NAME          0x13
#define FILE_NAME_NAME_LENGTH   1

class Volume;

#define SHORT_SYMLINK_NAME_LENGTH   144     // length incl. terminating '\0'

struct bfs_inode {
    int32       magic1;
    inode_addr  inode_num;
    int32       uid;
    int32       gid;
    int32       mode;       // see sys/stat.h
    int32       flags;
    bigtime_t   create_time;
    bigtime_t   last_modified_time;
    inode_addr  parent;
    inode_addr  attributes;
    uint32      type;       // attribute type

    int32       inode_size;
    uint32      etc;        // a pointer to the Inode object during construction

    union {
        data_stream data;
        char        short_symlink[SHORT_SYMLINK_NAME_LENGTH];
    };
    int32       pad[4];

    small_data  small_data_start[0];

    int32 Magic1() const { return BFS_ENDIAN_TO_HOST_INT32(magic1); }
    int32 UserID() const { return BFS_ENDIAN_TO_HOST_INT32(uid); }
    int32 GroupID() const { return BFS_ENDIAN_TO_HOST_INT32(gid); }
    int32 Mode() const { return BFS_ENDIAN_TO_HOST_INT32(mode); }
    int32 Flags() const { return BFS_ENDIAN_TO_HOST_INT32(flags); }
    int32 Type() const { return BFS_ENDIAN_TO_HOST_INT32(type); }
    int32 InodeSize() const { return BFS_ENDIAN_TO_HOST_INT32(inode_size); }
    bigtime_t LastModifiedTime() const { return BFS_ENDIAN_TO_HOST_INT64(last_modified_time); }
    bigtime_t CreateTime() const { return BFS_ENDIAN_TO_HOST_INT64(create_time); }
    small_data *SmallDataStart() { return small_data_start; }

    status_t InitCheck(Volume *volume);
        // defined in Inode.cpp
} _PACKED;
...
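For context, here is a minimal, hypothetical sketch of the kind of check at bfs.h:375 that produces these errors; it is reconstructed from the error messages, not copied from the actual source. IsLast() apparently compares addresses by casting both the small_data entry and the inode to fssh_addr_t, and when the 32-bit typedef is selected while the compiler produces 64-bit code, those pointer-to-integer casts cannot be done without discarding bits:
// Hypothetical sketch, NOT the actual bfs.h code; it only illustrates
// why g++ rejects the casts when fssh_addr_t is 32 bits wide but
// pointers are 64 bits wide.
#include <stdint.h>

typedef uint32_t fssh_addr_t;   // the #else branch of fssh_types.h

struct bfs_inode;

struct small_data {
    bool IsLast(const bfs_inode *inode) const;
};

bool
small_data::IsLast(const bfs_inode *inode) const
{
    // Both casts below discard the upper half of a 64-bit pointer, so
    // g++ reports "cast from '...' to 'fssh_addr_t' loses precision".
    fssh_addr_t inodeEnd = (fssh_addr_t)inode /* + inode size */;
    return (fssh_addr_t)this >= inodeEnd;
}
The fix pursued in this ticket does not touch this code; it makes the host tools build 32-bit objects (-m32), so that pointers and fssh_addr_t have the same width again.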
Change History (16)
comment:1 by , 16 years ago
Cc: | added
---|---
comment:2 by , 16 years ago
comment:3 by , 16 years ago
Hmm. What in the command line/output is indicative of 64 bits? In any case, how does one compile properly on a 64-bit system? I read that all you had to do was build everything in the linux32 environment; are you saying I need to grab a 32-bit version of cc and then do some funky aliasing to make the jam scripts work?
comment:4 by , 16 years ago (follow-up: comment:5)
As far as I know, the cc line should contain "-m32" when it is compiling 32-bit code. The configure script should add that automatically if your environment is recognized as 64-bit. You can also force configure to always add that flag by passing --use-32bit on the configure command line.
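For illustration only (a hypothetical invocation based on the cc line quoted in the report above), a 32-bit host build would insert the switch right after the compiler name and leave the remaining options unchanged:
cc -m32 -c "src/add-ons/kernel/file_systems/bfs/bfs_disk_system.cpp" -O1 -Wall ...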
comment:5 by , 16 years ago (follow-up: comment:6)
comment:6 by , 16 years ago
Replying to mmadia:
> Replying to monni:
> > You can also force configure to always add that flag by adding --use-32bit to configure command line.
> That shouldn't be necessary with #3391.
I know... I only listed it as a "last resort" in case the auto-detection fails for any reason. I'm really not familiar with how much "linux32" masquerades the system on each different distribution, and as the flag still exists there is likely a reason for it.
comment:7 by , 16 years ago (follow-up: comment:8)
Component: | File Systems/BFS → Build System
---|---
Owner: | changed from | to
Thanks for your help...
I tried the --use-32bit flag, but the -m32 cc flag still wasn't added. I ended up editing the bootconfig file that configure spits out, changing the 32-bit flag there from 0 to 1, and that got the -m32 option to show up in the jam output. The reported error seemingly went away; however, the compile is now failing looking for gnu/stubs-32.h (I think that is the right file), and all I have in that directory is stubs-64.h. I also noticed when running ./configure --build-tools that the script does not think my compiler is a cross compiler (even though the last reported line says success).
It looks like compiling Haiku on this system, in this OS, is too complex for me ATM. :( I am going to try to get a Haiku image running on another computer and may then try to compile in that environment. In the meantime, I suppose I will file this ticket under Build System, since I don't know what else to do with it.
Happy coding!
comment:8 by , 16 years ago
Replying to Phoenix137:
> Thanks for your help...
> I tried the --use-32bit flag, but the -m32 cc flag still wasn't added. I ended up editing the bootconfig file that configure spits out, changing the 32-bit flag there from 0 to 1, and that got the -m32 option to show up in the jam output. The reported error seemingly went away; however, the compile is now failing looking for gnu/stubs-32.h (I think that is the right file), and all I have in that directory is stubs-64.h.
That means you are missing 32-bit development packages. Not all distributions have them installed by default.
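For example, on an Ubuntu host like the reporter's, the 32-bit support usually comes from the gcc-multilib and g++-multilib packages (exact package names may vary by distribution), installed with something along the lines of:
sudo apt-get install gcc-multilib g++-multilib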
comment:9 by , 16 years ago
Cc: | added
---|---
I'm seeing this as well nowadays. I wonder if this became an issue with the automatic --use-32bit detection.
I've always used the following before:
setarch linux32 ../configure --use-32bit --use-gcc-pipe --build-cross-tools ../../buildtools/ --alternative-gcc-output-dir ../generated-gcc4
comment:10 by , 16 years ago
Yes, I can confirm this is happening, and I was able to partially track it down: when you are on a 64-bit host, the --use-32bit option is completely ignored even if you set it explicitly. A workaround is to edit your "generated/build/BuildConfig" file and change:
HAIKU_HOST_USE_32BIT ?= "0" ;
to:
HAIKU_HOST_USE_32BIT ?= "1" ;
This will fix the build.
comment:11 by , 16 years ago
Sounds like the configure script needs another patch so that it doesn't auto-detect the need for that setting when it has been set explicitly. This changed in the recent change that detects the environment, as seen here:
http://dev.haiku-os.org/browser/haiku/trunk/configure#L352
Also, perhaps there should be more intelligence in the auto-detection to handle situations where linux32 has been used. (FWIW, I had always assumed the use of linux32 prevented the need for that flag, but I guess not!)
(Note: is "Platform" in Trac supposed to refer to the target platform or the host platform? I had always assumed the target.)
comment:13 by , 16 years ago
A one-liner would be to change
use_32bit=0 ;;
to
set_default_value use32bit 0 ;;
Those of us who use setarch linux32 can then still pass --use-32bit, since set_default_value only applies the default when the option hasn't already been set explicitly.
comment:14 by , 16 years ago
This should be resolved with changeset:30779, which removed the incomplete 64-bit host detection. "gcc-multilib" and "g++-multilib" need to be installed, and then
linux32 configure --use-32bit <options>
should work as expected.
comment:15 by , 16 years ago
Resolution: | → fixed
---|---
Status: | new → closed
Lacking a 64-bit Linux system, I can't test it myself; I'm trusting that Matt is right. Please reopen if there's still a problem.
comment:16 by , 15 years ago
Platform: | x64 → x86-64
---|---
I would say this is not a bug. It is a known fact that Haiku doesn't support 64-bit compilers. The 32-bit "subsystem" just changes the identification string returned by "uname"; it doesn't actually change the compiler. The compiler command line and output suggest it is still using a 64-bit compiler.