In Java, when you write

int b;
b = b + 1.0;

you get a "possible loss of precision" compile-time error. But why is it that when you write

int b;
b += 1.0;

there is no error?
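For reference, here is a minimal, compilable sketch of the two cases (the class name PrecisionDemo and the initial value 0 are just for illustration):

public class PrecisionDemo {
    public static void main(String[] args) {
        int b = 0;

        // Does not compile: "incompatible types: possible lossy conversion
        // from double to int" (older javac versions report this as
        // "possible loss of precision")
        // b = b + 1.0;

        // Compiles and runs without any error or warning
        b += 1.0;

        System.out.println(b); // prints 1
    }
}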