import java.text.DecimalFormat;

DecimalFormat df = new DecimalFormat( "#0" );
String fmt = df.format( 11111111E15f );   // a float argument – but watch what happens
With a little bit of digging, I figured out what's going on. The DecimalFormat class doesn't have a method with a format( float ) signature. So my float parameter is being silently widened to a double, and the method with the format( double ) signature is being called instead. Inside that method, the formatter works from the double's 53-bit significand, not the 24 bits actually present in the original float – so it emits digits well past the float's real precision (roughly 7 significant decimal digits, versus the 17 or so a double can carry).
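Here's a minimal sketch that makes the widening visible (the class name and the workaround are my own illustration, not anything from the DecimalFormat API, and the exact digits printed will depend on the value you feed in):

import java.text.DecimalFormat;

public class FloatFormatDemo {
    public static void main( String[] args ) {
        DecimalFormat df = new DecimalFormat( "#0" );
        float f = 11111111E15f;

        // The float is widened to double, so format( double ) runs and
        // prints digits far beyond the float's ~7-digit precision.
        System.out.println( df.format( f ) );

        // One possible workaround: round-trip through Float.toString( ),
        // which yields the float's shortest round-tripping decimal form,
        // so the noise digits never make it into the double.
        System.out.println( df.format( Double.parseDouble( Float.toString( f ) ) ) );
    }
}

The second line should print something much closer to what you'd expect from a float, because Float.toString only emits as many digits as the float actually carries.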
I'm really surprised it was done that way in the first place, and even more surprised that the Java folks haven't fixed it. That could lead to some mighty misleading results!
My class won't do that :)