Overflow can cause some very subtle bugs. One that I've run into several times over the years is in a comparison, like that used to implement the java.lang.Comparable interface. You might see something like this:
@Override
public int compareTo( T obj ) {
    return intField - obj.intField;
}

That looks innocent enough, but it will fail – silently! – if an overflow or underflow occurs in that subtraction. This will occur, for example, if intField contained 2,000,000,000 and obj.intField contained -2,000,000,000. That comparison would return a negative number, which of course would be incorrect.
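To see the failure concretely, here is a minimal, self-contained sketch (the Holder class and its intField are made up for illustration) using exactly the values described above:

public class OverflowDemo {

    static class Holder implements Comparable<Holder> {
        final int intField;
        Holder( int intField ) { this.intField = intField; }

        @Override
        public int compareTo( Holder obj ) {
            return intField - obj.intField;   // overflows for extreme values
        }
    }

    public static void main( String[] args ) {
        Holder big   = new Holder( 2000000000 );
        Holder small = new Holder( -2000000000 );
        // 2,000,000,000 - (-2,000,000,000) wraps around to -294,967,296,
        // so this prints a negative number even though big is clearly larger
        System.out.println( big.compareTo( small ) );
    }
}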
The traditional way of handling overflow in Java (and many other languages) is to “upcast” to a larger precision integer to do the comparison. The implementation above could be revised to this:
@Override
public int compareTo( T obj ) {
    long c = (long) intField - (long) obj.intField;
    return (c == 0) ? 0 : (c > 0) ? 1 : -1;
}

This upcasts the int arguments to long, subtracts as long (thus avoiding the overflow), then turns the result into one of three int values (-1, 0, 1). It has one unarguable advantage: it works correctly with all int values.
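As a quick sanity check (again just a sketch, not part of the original implementation), the same extreme values run through the upcast version produce the correct answer:

public class UpcastCheck {
    public static void main( String[] args ) {
        // Same extreme values as before; the subtraction is done in long,
        // so it cannot overflow (4,000,000,000 fits comfortably in a long)
        long c = (long) 2000000000 - (long) -2000000000;
        int result = (c == 0) ? 0 : (c > 0) ? 1 : -1;
        System.out.println( result );   // prints 1, the correct ordering
    }
}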
However, I just discovered something that has somehow eluded me in over ten years of Java development (and may well be true in other programming languages, too): the relational comparison operators (<, >, <=, and >=) are apparently not implemented with a subtraction – instead, they appear to have access to the underlying hardware's knowledge of overflow and underflow. That means that we can implement the comparison method in a much cleaner way:
@Override
public int compareTo( T obj ) {
    return (intField == obj.intField) ? 0 : (intField > obj.intField) ? 1 : -1;
}

This also has the advantage of returning a pure signum result (-1, 0, or 1); the second example did as well.
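If you're on Java 7 or later, the same relational-operator technique is already packaged up in the standard library: Integer.compare(int, int) is (at least in OpenJDK) written with < and == rather than a subtraction, so the method could just as well delegate to it:

@Override
public int compareTo( T obj ) {
    // Integer.compare compares with relational operators rather than
    // a subtraction, so there is no overflow to worry about
    return Integer.compare( intField, obj.intField );
}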