Inconsistency in java.lang.Double implementation (Oracle JDK 1.8)?

I had a look at the implementation of the java.lang.Double class (the relevant declarations are sketched below). The value for NaN is specified as 0x7ff8000000000000L. The public static final double NaN field is set to 0.0d / 0.0, which should evaluate to 0x7ff8000000000000L if the JVM does implement it that way.

  1. Why was this value (0x7ff8000000000000L) chosen? Is there anything special about it (for example its bit mask)?

  2. Why is the field set to that value only implicitly, relying on the underlying implementation of the 0.0d / 0.0 operation, while the static method public static long doubleToLongBits(double value) explicitly uses 0x7ff8000000000000L for NaN arguments? Wouldn't it be safer to set it explicitly, since 0.0d / 0.0 depends heavily on the JVM's implementation and could theoretically change (although it most likely never will)?

The same applies to POSITIVE_INFINITY and NEGATIVE_INFINITY. The fields are set to their values implicitly, yet some methods use explicitly specified values. Is there a reason behind this?

Thanks for helping me learn something new every day :-).
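For reference, the declarations being compared look roughly like the sketch below. This is a paraphrase of the OpenJDK 8 sources (class name changed, internal mask constants written out as literals), not the verbatim code:

```java
// Paraphrased sketch of the relevant parts of java.lang.Double in OpenJDK 8.
public final class DoubleSketch {
    // The constants are defined via constant arithmetic expressions ("implicitly") ...
    public static final double POSITIVE_INFINITY = 1.0 / 0.0;
    public static final double NEGATIVE_INFINITY = -1.0 / 0.0;
    public static final double NaN = 0.0d / 0.0;

    // ... while doubleToLongBits names the NaN bit pattern explicitly.
    public static long doubleToLongBits(double value) {
        long result = Double.doubleToRawLongBits(value);
        // NaN check: all-ones exponent field and a non-zero significand.
        if (((result & 0x7FF0000000000000L) == 0x7FF0000000000000L)
                && (result & 0x000FFFFFFFFFFFFFL) != 0L) {
            result = 0x7ff8000000000000L; // the canonical NaN bit pattern
        }
        return result;
    }
}
```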

The public static final double NaN field is set to 0.0d / 0.0 which should evaluate to 0x7ff8000000000000L if the JVM does implement it that way.

No: per the language spec:

Division of a zero by a zero results in NaN

0x7ff8000000000000L is a long, not a double, so it cannot be used directly as the field initializer.
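To see what would happen if you tried, here is a minimal sketch (hypothetical class name): assigning the long literal to a double field merely performs a widening primitive conversion of the numeric value, which yields a large finite number rather than NaN:

```java
public class LongLiteralIsNotNan {
    // Widening primitive conversion long -> double: the numeric value of
    // 0x7ff8000000000000L is converted; the bit pattern is not reinterpreted.
    static final double WIDENED = 0x7ff8000000000000L;

    public static void main(String[] args) {
        System.out.println(WIDENED);               // a large finite number, about 9.2211E18
        System.out.println(Double.isNaN(WIDENED)); // false
    }
}
```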

The documentation of Double.NaN does state that its value is "equivalent to the value returned by Double.longBitsToDouble(0x7ff8000000000000L)". However, 0.0d / 0.0 is preferred for initializing the field because it is a compile-time constant expression, whereas a method invocation is not.
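A small check (hypothetical class name) illustrating the point: the constant expression 0.0d / 0.0, the Double.NaN field it initializes, and the method call Double.longBitsToDouble(0x7ff8000000000000L) all produce a NaN whose canonical bit pattern, as reported by doubleToLongBits, is 0x7ff8000000000000L:

```java
public class NanBitPatterns {
    public static void main(String[] args) {
        double fromDivision = 0.0d / 0.0;                                   // compile-time constant expression
        double fromField    = Double.NaN;                                   // initialized with 0.0d / 0.0
        double fromBits     = Double.longBitsToDouble(0x7ff8000000000000L); // method call, not a constant

        // doubleToLongBits reports the canonical NaN bits for all three.
        System.out.println(Long.toHexString(Double.doubleToLongBits(fromDivision))); // 7ff8000000000000
        System.out.println(Long.toHexString(Double.doubleToLongBits(fromField)));    // 7ff8000000000000
        System.out.println(Long.toHexString(Double.doubleToLongBits(fromBits)));     // 7ff8000000000000

        // NaN never compares equal to anything, including itself,
        // so the comparison has to go through the bit patterns.
        System.out.println(fromDivision == fromField); // false
    }
}
```

Because Double.NaN is initialized with a constant expression, it is a constant variable and reads of it can be inlined by the compiler, which a longBitsToDouble call would not allow.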

Why was this value (0x7ff8000000000000L) chosen?

As JLS Sec 4.2.3 states:

IEEE 754 allows multiple distinct NaN values for each of its single and double floating-point formats. While each hardware architecture returns a particular bit pattern for NaN when a new NaN is generated, a programmer can also create NaNs with different bit patterns to encode, for example, retrospective diagnostic information.

For the most part, the Java SE Platform treats NaN values of a given type as though collapsed into a single canonical value, and hence this specification normally refers to an arbitrary NaN as though to a canonical value.

The doubleToLongBits method has to return some value for NaN arguments, so this is the value they chose to return.
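A sketch of that collapsing behaviour (hypothetical class name): doubleToRawLongBits preserves whatever NaN bit pattern a double actually carries, while doubleToLongBits collapses every NaN to the canonical 0x7ff8000000000000L:

```java
public class NanCanonicalization {
    public static void main(String[] args) {
        // A quiet NaN with extra payload bits set. Common JVMs preserve such quiet-NaN
        // payloads, although the spec does not guarantee every bit pattern survives.
        double payloadNan = Double.longBitsToDouble(0x7ff8000000000123L);
        System.out.println(Double.isNaN(payloadNan)); // true

        // Raw bits: whatever pattern the value actually carries.
        System.out.println(Long.toHexString(Double.doubleToRawLongBits(payloadNan))); // typically 7ff8000000000123

        // Collapsed bits: every NaN maps to the single canonical value.
        System.out.println(Long.toHexString(Double.doubleToLongBits(payloadNan)));    // 7ff8000000000000
    }
}
```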