Sorry for digging up an old thread.
I came across the following passage while reading JavaScript: The Definitive Guide:
The IEEE-754 floating-point representation used by JavaScript (and just about every
other modern programming language) is a binary representation, which can exactly
represent fractions like 1/2, 1/8, and 1/1024. Unfortunately, the fractions we use most
commonly (especially when performing financial calculations) are decimal fractions
1/10, 1/100, and so on. Binary floating-point representations cannot exactly represent
numbers as simple as 0.1.
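One way to see this approximation directly (a quick sketch of my own, not from the book) is to print the numbers with more digits than their shortest round-trip representation:

```javascript
// 0.1 cannot be represented exactly in binary, so the stored value
// has trailing nonzero digits once we ask for enough precision.
console.log((0.1).toFixed(20)); // trailing nonzero digits reveal the approximation

// 1/2 is a power of two, so it is stored exactly.
console.log((0.5).toFixed(20)); // all zeros after the 5
```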
JavaScript numbers have plenty of precision and can approximate 0.1 very closely. But
the fact that this number cannot be represented exactly can lead to problems. Consider
this code:
var x = .3 - .2; // thirty cents minus 20 cents
var y = .2 - .1; // twenty cents minus 10 cents
x == y           // => false: the two values are not the same!
x == .1          // => false: .3-.2 is not equal to .1
y == .1          // => true: .2-.1 is equal to .1
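Printing the intermediate values (my own check, not from the book) makes it clear why the comparisons come out this way:

```javascript
var x = 0.3 - 0.2; // the approximations of 0.3 and 0.2 don't cancel cleanly
var y = 0.2 - 0.1; // this difference happens to round to exactly the double nearest 0.1
console.log(x); // 0.09999999999999998
console.log(y); // 0.1
```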
Because of rounding error, the difference between the approximations of .3 and .2 is
not exactly the same as the difference between the approximations of .2 and .1. It is
important to understand that this problem is not specific to JavaScript: it affects any
programming language that uses binary floating-point numbers.
I was shocked when I first read this, so I tried the same thing in R and got the same result.