Would making plain int 64-bit break a lot of reasonable code?
Posted by R.. on Stack Overflow, 2010-12-30.
Until recently, I'd considered the decision by most systems implementors/vendors to keep plain int 32-bit even on 64-bit machines a sort of expedient wart. With modern C99 fixed-size types (int32_t and uint32_t, etc.) the need for there to be a standard integer type of each size 8, 16, 32, and 64 mostly disappears, and it seems like int could just as well be made 64-bit.
However, the biggest real consequence of the size of plain int in C comes from the fact that C essentially does not have arithmetic on smaller-than-int types. In particular, if int is larger than 32 bits, the result of any arithmetic on uint32_t values has type signed int, which is rather unsettling.
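For example (a sketch; square is just an illustrative name, and the 64-bit-int implementation is hypothetical):

    #include <stdint.h>

    /* With 32-bit int, uint32_t is unsigned int: no promotion happens
     * and the multiplication wraps modulo 2^32, as defined. On a
     * hypothetical implementation with 64-bit int, x promotes to
     * signed int, and for x = 0xFFFFFFFF the product
     * 0xFFFFFFFE00000001 exceeds INT64_MAX -- signed overflow,
     * i.e. undefined behavior. */
    uint32_t square(uint32_t x)
    {
        return x * x;
    }

    /* Converting up before multiplying keeps the arithmetic unsigned
     * and well-defined regardless of the width of int. */
    uint32_t square_safe(uint32_t x)
    {
        return (uint32_t)((uint64_t)x * (uint64_t)x);
    }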
Is this a good reason to keep int permanently fixed at 32-bit on real-world implementations? I'm leaning towards saying yes. It seems to me like there could be a huge class of uses of uint32_t which break when int is larger than 32 bits. Even applying the unary minus or bitwise complement operator becomes dangerous unless you cast back to uint32_t.
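For instance, assuming the same hypothetical 64-bit-int implementation:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t x = 1;

        /* With 32-bit int, -x and ~x stay unsigned 32-bit values.
         * With 64-bit int, x promotes to signed int first, so -x is
         * the signed value -1 and ~x is -2, and any use that doesn't
         * cast back changes meaning. */
        printf("%d\n", -x > 0);            /* 1 if int is 32-bit, 0 if 64-bit */
        printf("%d\n", ~x == 0xFFFFFFFE);  /* 1 if int is 32-bit, 0 if 64-bit */

        /* Casting back restores the intended unsigned semantics: */
        printf("%d\n", (uint32_t)-x > 0);  /* 1 either way */
        return 0;
    }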
Of course the same issues apply to uint16_t and uint8_t on current implementations, but everyone seems to be aware of and used to treating them as "smaller-than-int" types.