Why can't decimal numbers be represented exactly in binary?

Posted by Barry Brown on Stack Overflow, 2009-07-06

There have been several questions posted to SO about floating-point representation. For example, the decimal number 0.1 doesn't have an exact binary representation, so it's dangerous to use the == operator to compare it to another floating-point number. I understand the principles behind floating-point representation.
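Just to make that concrete, here's a minimal Python sketch of the kind of comparison I mean (the tolerance is my own arbitrary choice):

    import math

    a = 0.1 + 0.2
    print(a == 0.3)     # False: neither side is exactly the real number 0.3
    print(a)            # 0.30000000000000004
    print(math.isclose(a, 0.3, rel_tol=1e-9))  # True: compare within a tolerance instead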

What I don't understand is this: from a mathematical perspective, why are the numbers to the right of the decimal point any more "special" than the ones to the left?

For example, the number 61.0 has an exact binary representation because the integral portion of any number is always exact. But the number 6.10 is not exact. All I did was move the decimal one place and suddenly I've gone from Exactopia to Inexactville. Mathematically, there should be no intrinsic difference between the two numbers -- they're just numbers.
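Python can show this directly: constructing a Decimal from a float prints the exact value the float actually stores, so here's a quick illustration of the contrast:

    from decimal import Decimal

    print(Decimal(61.0))  # 61 (stored exactly)
    print(Decimal(6.1))   # 6.0999999999999996... (the nearest binary fraction, not 6.1)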

By contrast, if I move the decimal one place in the other direction to produce the number 610, I'm still in Exactopia. I can keep going in that direction (6100, 610000000, 610000000000000) and they're still exact, exact, exact. But as soon as the decimal crosses some threshold, the numbers are no longer exact.
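The same Decimal trick (just a quick check, nothing authoritative) confirms that every one of those integers is stored exactly:

    from decimal import Decimal

    for n in (610.0, 6100.0, 610000000.0, 610000000000000.0):
        print(Decimal(n))  # prints 610, 6100, 610000000, 610000000000000, all unchanged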

What's going on?

Edit: To clarify, I want to stay away from discussion of industry-standard representations such as IEEE 754 and stick with what I believe is the mathematically "pure" way. In base 10, the positional values are:

... 1000  100   10    1   1/10  1/100 ...

In binary, they would be:

... 8    4    2    1    1/2  1/4  1/8 ...

There are also no arbitrary limits placed on these numbers. The positions increase indefinitely to the left and to the right.
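To see what those positional values do to a fraction like 1/10, you can expand it in base 2 by repeatedly doubling it and peeling off the integer part. This is a minimal sketch using a helper of my own (binary_expansion is not a library function):

    from fractions import Fraction

    def binary_expansion(x: Fraction, digits: int) -> str:
        # Expand the fractional part of x in base 2, one digit at a time.
        bits = []
        x -= int(x)              # keep only the part right of the point
        for _ in range(digits):
            x *= 2
            bit = int(x)         # the next binary digit is the new integer part
            bits.append(str(bit))
            x -= bit
        return "0." + "".join(bits)

    print(binary_expansion(Fraction(1, 10), 20))  # 0.00011001100110011001 (0011 repeats forever)
    print(binary_expansion(Fraction(1, 2), 4))    # 0.1000 (terminates)

The expansion of 1/10 never reaches zero, which is exactly the behavior I'm asking about.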
