Division to the nearest 1 decimal place without floating point math?
Posted by John Sheares on Stack Overflow
Published on 2010-04-11T00:47:24Z
I am having some speed issues with my C# program and have identified that a percentage calculation is causing the slowdown. The calculation is simply n / d * 100. Both the numerator and denominator can be any integer. The numerator can never be greater than the denominator and is never negative, so the result is always in the range 0-100.

Right now this is done with floating-point math, and it is somewhat slow since it is being calculated tens of millions of times. I don't need anything more accurate than the nearest 0.1 percent, and I only use the calculated value to check whether it is bigger than a fixed constant.

I am thinking everything should be kept as integers, so with 0.1 accuracy the range would be 0-1000. Is there some way to calculate this percentage without floating-point math?
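One common way to do this, assuming the threshold is known up front, is to never compute the percentage at all: pre-scale the threshold to tenths of a percent (0-1000) and cross-multiply, turning `n / d * 100 >= t / 10` into the integer comparison `n * 1000 >= t * d`. The sketch below (the `AtLeastTenths` name and `Main` driver are illustrative, not from the original post) assumes `d > 0` and `0 <= n <= d` as stated in the question:

```csharp
using System;

class PercentCheck
{
    // Returns true when n/d, as a percentage, is at least tenths/10 percent.
    // Derivation: n/d * 100 >= tenths/10  <=>  n * 1000 >= tenths * d  (d > 0).
    // The products are computed in long so large int inputs cannot overflow.
    static bool AtLeastTenths(int n, int d, int tenths)
    {
        return (long)n * 1000 >= (long)tenths * d;
    }

    static void Main()
    {
        // 1/3 is 33.33...%, so it clears a 33.3% bar but not a 33.4% bar.
        Console.WriteLine(AtLeastTenths(1, 3, 333));
        Console.WriteLine(AtLeastTenths(1, 3, 334));
        // Exactly 50.0%.
        Console.WriteLine(AtLeastTenths(50, 100, 500));
    }
}
```

This replaces one division plus one floating-point comparison per call with two integer multiplications and an integer comparison, and it is exact: no rounding happens, so there are no edge cases where a float result lands on the wrong side of the constant.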
© Stack Overflow or respective owner