Recently I came across a piece of code that looked similar to the following:
#include <cstdio>

int main() {
    unsigned int a, b;
    a = 20;
    b = 30;
    // a - b is evaluated in unsigned arithmetic before the conversion to double
    double c = double(a - b);
    printf("%lf\n", c);
    return 0;
}
One might assume, at first glance, that this would print the expected -10.0, but that is not the case. It prints a huge number: 4294967286.0!
The reason is that a and b are both unsigned types. The subtraction a - b is therefore performed in unsigned arithmetic, which cannot go negative; instead it wraps around modulo 2^32, so 20 - 30 yields 2^32 - 10 = 4294967286. This is not a quirk of GCC or any particular compiler; the C and C++ standards require unsigned arithmetic to wrap this way. By the time the cast to double happens, the value is already the huge positive number, and the conversion simply preserves it.
In other words, whenever you subtract unsigned quantities, be careful to check what the expression actually evaluates to before the result is used or converted!
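If what you really want is the signed difference, one common way to avoid the surprise is to convert the operands before subtracting, so the arithmetic happens in a type that can represent negative values. This is just a minimal sketch of that idea, not the only fix:

#include <cstdio>

int main() {
    unsigned int a = 20, b = 30;

    // Convert each operand to double first, so the subtraction itself
    // is done in floating-point arithmetic and can go negative.
    double c = double(a) - double(b);
    printf("%lf\n", c);    // prints -10.000000

    // Alternatively, cast to a wider signed integer type before subtracting.
    long long d = (long long)a - (long long)b;
    printf("%lld\n", d);   // prints -10

    return 0;
}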