When you store whole numbers in binary you usually have a choice of signed or unsigned. The signed version allows negative numbers, but at the cost of halving the range of positive values available.
For example, a signed char allows values -128 to +127, but an unsigned char allows values 0 to 255.
If you compare a signed int with an unsigned int, using ==, !=, > or < for example, the usual arithmetic conversions convert the signed value to unsigned before the comparison is made.
#include <stdio.h>
int main(void) {
    signed int a = -1;
    unsigned int b = 1;
    if (a > b) printf("a>b\n");   /* this branch runs! */
    if (b > a) printf("b>a\n");
    return 0;
}
This prints a>b even though a is -1 and b is 1!
This is because -1, converted to an unsigned int, wraps around to a very big number, in fact the biggest value an unsigned int can hold (UINT_MAX).
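A quick sketch of the wrap-around (the printed value assumes a 32-bit unsigned int, but the identity with UINT_MAX holds on any conforming implementation):

#include <stdio.h>
#include <limits.h>
int main(void) {
    unsigned int u = (unsigned int)-1;   /* -1 wraps to the maximum unsigned value */
    printf("%u\n", u);                   /* 4294967295 with a 32-bit unsigned int  */
    printf("%d\n", u == UINT_MAX);       /* prints 1: it is exactly UINT_MAX       */
    return 0;
}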
What pisses me off is that, even when C was invented, making the comparison work properly would have cost just one extra check of one bit. Basically, whatever the comparison, you just check whether the signed value is negative before doing it. If it is negative it cannot be equal to the unsigned value and must be smaller than it, so the result of the comparison is already decided; if it is not negative, you go on and compare as normal.
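As a rough sketch of what that could look like, here is a hypothetical helper of my own (safe_greater is not a standard function, though C++20 later added std::cmp_greater and friends in <utility> that behave this way) which does the sign check before comparing:

#include <stdio.h>

/* Hypothetical helper: compare a signed int with an unsigned int by
   checking the sign first, then comparing as unsigned only when safe. */
static int safe_greater(int s, unsigned int u)
{
    if (s < 0)
        return 0;                  /* a negative value can never exceed an unsigned one */
    return (unsigned int)s > u;    /* both non-negative, so the unsigned compare is fine */
}

int main(void)
{
    signed int a = -1;
    unsigned int b = 1;
    printf("%d\n", safe_greater(a, b));  /* prints 0, as you would expect */
    printf("%d\n", safe_greater(5, b));  /* prints 1 */
    return 0;
}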
To me this would have been a far more logical behaviour than changing the value of the signed variable by making it unsigned.