Why are there differing definitions of INT64_MIN? And why do they behave differently?
Posted by abelenky on Stack Overflow, 2012-06-29
The stdint.h header at my company reads:
#define INT64_MIN -9223372036854775808LL
But in some code in my project, a programmer wrote:
#undef INT64_MIN
#define INT64_MIN (-9223372036854775807LL -1)
He then uses this definition in the code.
The project compiles with no warnings/errors.
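For reference, a minimal standalone test along these lines (my own reduction, not the project code; the file name and compiler flags are assumptions) builds with no diagnostics, e.g. under gcc -Wall -Wextra:

/* min_ok.c - minimal reduction, not the actual project code */
#include <stdio.h>

#undef INT64_MIN
#define INT64_MIN (-9223372036854775807LL - 1)  /* both constants fit in long long */

int main(void)
{
    long long x = INT64_MIN;
    printf("%lld\n", x);  /* prints -9223372036854775808 */
    return 0;
}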
When I attempted to remove his definition and use the default one, I got:
error: integer constant is so large that it is unsigned
The two definitions appear to be equivalent.
Why does one compile fine while the other fails?
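Swapping in the single-literal form reproduces the problem for me (again my own reduction; on my gcc it surfaces as a warning, so I assume the project's flags, e.g. -Werror, promote it to an error):

/* min_bad.c - same test with the single-literal form */
#include <stdio.h>

#undef INT64_MIN
#define INT64_MIN -9223372036854775808LL  /* literal 9223372036854775808 is LLONG_MAX + 1 */

int main(void)
{
    long long x = INT64_MIN;  /* gcc: "integer constant is so large that it is unsigned" */
    printf("%lld\n", x);
    return 0;
}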