LONG_MAX on 64-bit OSX


I came across an annoying problem while adding some asserts to my FBX-to-Athena tool. I was adding a check on my vertex buffer size to make sure it was not larger than a GLsizeiptr (defined as a long in the iOS 5 SDK).

I’ve always assumed char to be 8 bits, short to be 16, long to be 32, and int to be whatever the system wants it to be (most likely the register size, for an atomic copy). However, my tool is set up as a 64-bit OSX project, where long comes out as 8 bytes, so the value given by LONG_MAX is huge, while int remains at 4 bytes.

I would never have noticed that the assert could never be hit if it hadn’t been for the warning the IDE gave after I wrote the statement: “Comparison is always true due to limited range of data type”.
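A minimal sketch of the kind of comparison that triggers that warning (the name vertexBufferSize is a hypothetical stand-in for the tool’s own variable, not the original code):

```c
#include <limits.h>

/* If the size is held in an int (or any 32-bit type), this
   comparison can never fail on LP64, because LONG_MAX (2^63 - 1)
   exceeds every possible int value. Compilers such as GCC and
   Clang flag this as "comparison is always true due to limited
   range of data type". */
int fits_in_buffer(int vertexBufferSize)
{
    return vertexBufferSize <= LONG_MAX;  /* always true on 64-bit OSX */
}
```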

In the end I just replaced LONG_MAX with INT32_MAX (from stdint.h). And after a bit of googling, I found this on Wikipedia:
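The fixed check might look something like this (the function and variable names are hypothetical stand-ins, not the tool’s actual code):

```c
#include <stdint.h>
#include <stddef.h>

/* INT32_MAX has the same value on LP64 and ILP32 alike, so the
   comparison stays meaningful regardless of sizeof(long), and the
   assert built on it can actually fire. */
int vertex_buffer_size_ok(size_t vertex_buffer_size)
{
    return vertex_buffer_size <= (size_t)INT32_MAX;
}
```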

The sizes of short, int, and long in C/C++ are dependent upon the implementation of the language:

  • On older, 16-bit operating systems, int was 16 bits and long was 32 bits.
  • On 32-bit Unix, DOS, and Windows, int and long are 32 bits, while long long is 64 bits. This is also true for 64-bit processors running 32-bit programs.
  • On 64-bit Unix, int is 32 bits, while long and long long are 64 bits.
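Given those differences, one way to avoid the surprise altogether is to state the width you need with the fixed-width types from stdint.h and verify it at compile time (index_t is a hypothetical example name, and _Static_assert requires C11):

```c
#include <stdint.h>

/* When code genuinely depends on a width, the fixed-width typedefs
   from stdint.h state the assumption outright instead of relying on
   whatever int/long happen to be under the current data model. */
typedef int32_t index_t;   /* 32 bits on LP64 and ILP32 alike */

/* C11's _Static_assert catches a data-model surprise at compile
   time rather than via a runtime assert that can never fire. */
_Static_assert(sizeof(long) == 4 || sizeof(long) == 8,
               "unexpected long size");
_Static_assert(sizeof(index_t) == 4, "index_t must be 32 bits");
```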

While this does sound kinda familiar, I don’t think I’ve ever run into a situation where it actually mattered until now.