Why is the precision accurate when Decimal() takes in a string instead of a float in Python?

Why are these values different, and what is the difference between them?

>>> from decimal import Decimal
>>> Decimal('0.1') + Decimal('0.1') + Decimal('0.1') - Decimal('0.3')
Decimal('0.0')

>>> Decimal(0.1) + Decimal(0.1) + Decimal(0.1) - Decimal(0.3)
Decimal('2.775557561565156540423631668E-17')

When you pass '0.1' as a string, it is converted directly to a Decimal with no loss of precision. But when you pass the float 0.1, the value has already been rounded to the nearest binary floating-point number before Decimal ever sees it, so the precision is already lost, as shown below:

>>> Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>> Decimal('0.1')
Decimal('0.1')
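
To make the difference concrete, you can compare the exact rational values involved with the fractions module (an illustrative session, not part of the original answer):

>>> from decimal import Decimal
>>> from fractions import Fraction
>>> Fraction(0.1)                                # exact value of the binary float 0.1
Fraction(3602879701896397, 36028797018963968)
>>> Fraction(Decimal(0.1)) == Fraction(0.1)      # Decimal(0.1) keeps that already-inexact value
True
>>> Fraction(Decimal('0.1')) == Fraction(1, 10)  # Decimal('0.1') is exactly one tenth
True
>>> Fraction(0.1) == Fraction(1, 10)
False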

This leads to all kinds of odd results:

>>> Decimal(0.3) - Decimal(0.1) + Decimal(0.1) + Decimal(0.1)
Decimal('0.3999999999999999944488848768')

Quoting from the decimal module source, which explains this well — if the input is a float, the module internally calls the class method "Decimal.from_float()":

Note that Decimal.from_float(0.1) is not the same as Decimal('0.1'). Since 0.1 is not exactly representable in binary floating point, the value is stored as the nearest representable value which is 0x1.999999999999ap-4. The exact equivalent of the value in decimal is 0.1000000000000000055511151231257827021181583404541015625.
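
You can confirm this in a REPL, and if you already have a float and want the string-like behaviour, a common workaround (a sketch of my own, not part of the quoted documentation) is to round-trip through str(), since repr/str of a float in Python 3 produces the shortest string that reproduces it:

>>> from decimal import Decimal
>>> Decimal.from_float(0.1) == Decimal(0.1)   # the float constructor takes the from_float path
True
>>> (0.1).hex()                               # the nearest representable double, as quoted above
'0x1.999999999999ap-4'
>>> x = 0.1
>>> Decimal(str(x))                           # workaround: convert the float to a string first
Decimal('0.1')
>>> Decimal(str(x)) + Decimal(str(x)) + Decimal(str(x)) - Decimal('0.3')
Decimal('0.0')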