Java vs. C#: BigInteger hex string yields different result?
Question:
This code in Java:
BigInteger mod = new BigInteger("86f71688cdd2612ca117d1f54bdae029", 16);
yields (in Java) the number
179399505810976971998364784462504058921
However, when I use C#:
BigInteger mod = BigInteger.Parse("86f71688cdd2612ca117d1f54bdae029", System.Globalization.NumberStyles.HexNumber); // base 16
I don't get the same number; instead I get:
-160882861109961491465009822969264152535
However, when I create the number directly from its decimal representation, it works:
BigInteger mod = BigInteger.Parse("179399505810976971998364784462504058921");
I tried converting the hex string to a byte array, reversing it, and creating a BigInteger from the reversed array, in case it was a byte array with a different endianness, but that didn't help...
I also ran into the following problem while porting the Java code to C#:
Java:
BigInteger k0 = new BigInteger(bytes);
To get the same number in C#, I have to reverse the array, because the endianness of the two BigInteger implementations differs.
The C# equivalent:
BigInteger k0 = new BigInteger(bytes.Reverse().ToArray());
Here is what MSDN says about BigInteger.Parse:
If value is a hexadecimal string, the Parse(String, NumberStyles) method interprets value as a negative number stored by using two's complement representation if its first two hexadecimal digits are greater than or equal to 0x80. In other words, the method interprets the highest-order bit of the first byte in value as the sign bit. To make sure that a hexadecimal string is correctly interpreted as a positive number, the first digit in value must have a value of zero. For example, the method interprets 0x80 as a negative value, but it interprets either 0x080 or 0x0080 as a positive value.
So, prepend a 0 to the hex string being parsed to force an unsigned interpretation.
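For example, a minimal sketch applying this to the value from the question (assuming .NET's System.Numerics.BigInteger; the expected output is the decimal value Java produced):

using System;
using System.Globalization;
using System.Numerics;

// The leading "0" keeps the high bit of the first hex byte from being read as a sign bit.
BigInteger mod = BigInteger.Parse("086f71688cdd2612ca117d1f54bdae029", NumberStyles.HexNumber);
Console.WriteLine(mod); // 179399505810976971998364784462504058921

By contrast, parsing "80" with NumberStyles.HexNumber yields -128, while "080" yields 128, exactly as the documentation above describes.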
As for round-tripping a big integer represented as a byte array between Java and C#: I would advise against it unless you really have to. But if you deal with the endianness issue, the two implementations do use compatible two's-complement representations.
MSDN, on BigInteger.ToByteArray():
The individual bytes in the array returned by this method appear in little-endian order. That is, the lower-order bytes of the value precede the higher-order bytes. The first byte of the array reflects the first eight bits of the BigInteger value, the second byte reflects the next eight bits, and so on.
The Java documentation, on BigInteger.toByteArray():
Returns a byte array containing the two's-complement representation of this BigInteger. The byte array will be in big-endian byte-order: the most significant byte is in the zeroth element.
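Putting the two quotes together: Java's toByteArray() produces big-endian two's-complement bytes, while C#'s BigInteger byte-array constructor expects little-endian two's-complement bytes, which is why the Reverse() call above is needed. A minimal sketch of that round trip, assuming the bytes were produced by Java's toByteArray() for the value from the question:

using System;
using System.Linq;
using System.Numerics;

// Big-endian two's-complement bytes, as Java's BigInteger.toByteArray() returns them
// for 179399505810976971998364784462504058921 (note the leading 0x00 sign byte).
byte[] javaBytes =
{
    0x00, 0x86, 0xF7, 0x16, 0x88, 0xCD, 0xD2, 0x61, 0x2C,
    0xA1, 0x17, 0xD1, 0xF5, 0x4B, 0xDA, 0xE0, 0x29
};

// Reverse to little-endian before handing the bytes to C#'s BigInteger constructor.
BigInteger k0 = new BigInteger(javaBytes.Reverse().ToArray());
Console.WriteLine(k0); // 179399505810976971998364784462504058921

If your .NET version has the BigInteger(ReadOnlySpan<byte>, bool isUnsigned, bool isBigEndian) constructor overload, you can pass the big-endian bytes directly with isBigEndian: true and skip the Reverse() call.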