format not printing all double digits even when precision exceeds available digits

I can't figure this one out.

It seems the JDK's number-formatting routines won't print all of a double's decimal digits, even when the format field is given enough precision to show them.

In particular, I don't understand why the second line of output from the program below is:

b.doubleValue() => 34981.29000000000000000000000000000000000000000000000000

I expected the printed value to be 34981.29000000000087311491370201110839843750000000000000

Can you help me understand why this happens?

PrecisionLoser.java

import java.math.BigDecimal;

public class PrecisionLoser {
    public static void main(final String[] args) {
        final BigDecimal b = new BigDecimal("34981.29");
        System.out.printf("b                => %.50f%n", b);
        System.out.printf("b.doubleValue()  => %.50f%n", b.doubleValue());
        System.out.printf("b.floatValue()   => %.50f%n", b.floatValue());
        System.out.printf("Double.MIN_VALUE =>     %.50f%n", Double.MIN_VALUE);
        System.out.println("-");
        final double d = 34981.2900000000008731149137020111083984375;
        System.out.printf("d                               => %.50f%n", d);
        System.out.printf("new BigDecimal(d)               => %.50f%n", new BigDecimal(d));
        System.out.printf("new BigDecimal(b.doubleValue()) => %.50f%n", new BigDecimal(b.doubleValue()));
        System.out.printf("d == b.doubleValue()            => %b%n", d == b.doubleValue());
        final double e = 34981.29;
        System.out.printf("d == e                          => %b%n", d == e);
        System.out.println("-");
        System.out.printf("Double.doubleToLongBits(b.doubleValue()) => 0x%16x%n", Double.doubleToLongBits(b.doubleValue()));
        System.out.printf("Double.doubleToLongBits(d)               => 0x%16x%n", Double.doubleToLongBits(d));
        System.out.printf("Double.doubleToLongBits(e)               => 0x%16x%n", Double.doubleToLongBits(e));
    }
}

Output

$ javac PrecisionLoser.java
$ java PrecisionLoser
b                => 34981.29000000000000000000000000000000000000000000000000
b.doubleValue()  => 34981.29000000000000000000000000000000000000000000000000
b.floatValue()   => 34981.28906250000000000000000000000000000000000000000000
Double.MIN_VALUE =>     0.00000000000000000000000000000000000000000000000000
-
d                               => 34981.29000000000000000000000000000000000000000000000000
new BigDecimal(d)               => 34981.29000000000087311491370201110839843750000000000000
new BigDecimal(b.doubleValue()) => 34981.29000000000087311491370201110839843750000000000000
d == b.doubleValue()            => true
d == e                          => true
-
Double.doubleToLongBits(b.doubleValue()) => 0x40e114a947ae147b
Double.doubleToLongBits(d)               => 0x40e114a947ae147b
Double.doubleToLongBits(e)               => 0x40e114a947ae147b

Stepping through the conversion in a debugger, I think what happens is that the double is first converted to a string with the requested precision ignored. The string is then adjusted to match the specified precision: padded with zeros at the end, or truncated with rounding (see applyPrecision in sun.misc.FormattedFloatingDecimal).
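If that reading is right, the behavior can be mimicked with a small sketch. This is my own reconstruction of the described pipeline, not the actual JDK code; the format method below is hypothetical, it only truncates instead of rounding, and it ignores the scientific-notation forms Double.toString can produce for very large or small values.

```java
import java.util.Locale;

public class FormatSketch {
    // Hypothetical reconstruction of the observed %.Nf pipeline:
    // convert the double to a string first, ignoring the precision,
    // then adjust the string to the requested precision.
    static String format(double d, int precision) {
        String s = Double.toString(d); // shortest round-trip form, e.g. "34981.29"
        int dot = s.indexOf('.');      // Double.toString always emits a decimal point
        int fracDigits = s.length() - dot - 1;
        if (fracDigits < precision) {
            // pad with trailing zeros, as %.Nf appears to do
            StringBuilder sb = new StringBuilder(s);
            for (int i = fracDigits; i < precision; i++) {
                sb.append('0');
            }
            return sb.toString();
        }
        // truncate to the requested precision (the real code rounds here)
        return s.substring(0, dot + 1 + precision);
    }

    public static void main(String[] args) {
        System.out.println(format(34981.29, 50));
        System.out.println(String.format(Locale.ROOT, "%.50f", 34981.29));
    }
}
```

For 34981.29 at precision 50 this sketch produces the same zero-padded string as %.50f, which is consistent with the debugger observation.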

Also, the initial conversion appears to use the (minimal?) number of decimal digits that would reproduce the original double value if the string were re-parsed.
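That round-trip property is easy to check directly; a minimal sketch using only standard java.lang and java.math APIs:

```java
import java.math.BigDecimal;

public class RoundTrip {
    public static void main(String[] args) {
        final double d = 34981.29;

        // Double.toString emits only as many digits as are needed to
        // uniquely distinguish this double from its neighbors...
        String shortest = Double.toString(d);
        System.out.println(shortest); // "34981.29"

        // ...so re-parsing the short string recovers the exact same value,
        System.out.println(Double.parseDouble(shortest) == d); // true

        // even though the exact binary value has a far longer decimal expansion.
        System.out.println(new BigDecimal(d));
    }
}
```

The last line prints the exact expansion 34981.2900000000008731149137020111083984375, matching the new BigDecimal(d) line in the output above; %.Nf apparently starts from the short form instead.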