To see how the rounding errors accumulate, here is a step-by-step breakdown at the bit level.
Assume the underlying software/hardware uses IEEE 754 double precision with the default round-to-nearest-even mode.
Converting 1/5 to binary (base 2) yields an infinitely repeating pattern:
0.00110011001100110011001100110011001100110011001100110011...
A double, however, stores only 53 significand bits (counted from the most significant 1 bit), so the value must be rounded.
This rounding introduces a small error: 0.2 is actually stored as:
0.0011001100110011001100110011001100110011001100110011010
In decimal terms, the stored value exceeds 1/5 by 0.000000000000000011102230246251565404236316680908203125.
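You can verify this directly: Python's decimal module shows the exact value a double holds (a small illustrative sketch, not part of the original code):

from decimal import Decimal

print(Decimal(0.2))   # 0.200000000000000011102230246251565404236316680908203125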
The first addition, 0.2 + 0.2, is exact: it amounts to multiplying by 2, which only increments the exponent and adds no new rounding error:
  0.0011001100110011001100110011001100110011001100110011010
+ 0.0011001100110011001100110011001100110011001100110011010
-----------------------------------------------------------
  0.0110011001100110011001100110011001100110011001100110100
The excess above 2/5, however, also doubles, to 0.00000000000000002220446049250313080847263336181640625.
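Again this is easy to confirm; the sum is exactly twice the stored value of 0.2, so no new error has been introduced (illustrative sketch using Python's decimal module):

from decimal import Decimal, getcontext

getcontext().prec = 60                         # enough digits to keep the Decimal arithmetic exact
print(Decimal(0.2 + 0.2))                      # 0.40000000000000002220446049250313080847263336181640625
print(Decimal(0.2 + 0.2) == 2 * Decimal(0.2))  # True: doubling introduced no new error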
The next addition, (0.2 + 0.2) + 0.2, produces an exact sum whose significand needs 54 bits:
  0.011001100110011001100110011001100110011001100110011010
+ 0.0011001100110011001100110011001100110011001100110011010
-----------------------------------------------------------
  0.1001100110011001100110011001100110011001100110011001110
To fit this result back into a double's 53-bit significand, it has to be rounded once more:
0.10011001100110011001100110011001100110011001100110100
The discarded bit leaves an exact halfway case, and round-to-nearest-even breaks the tie upward here, so the new rounding error adds to the accumulated excess instead of cancelling it.
The excess above 3/5 thus grows to 0.000000000000000088817841970012523233890533447265625.
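Checking the running sum the same way (illustrative sketch):

from decimal import Decimal

print(Decimal(0.2 + 0.2 + 0.2))   # 0.600000000000000088817841970012523233890533447265625
print(0.2 + 0.2 + 0.2 == 0.6)     # False: the sum is not the double closest to 3/5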
To keep these errors from piling up, you could instead compute each value directly:
x1 = i / 5.0
Both operands are then exact: 5.0 is 101.0 in binary (only 3 significand bits), and any integer i up to 2^53 is exactly representable in a double. Since IEEE 754 guarantees that division returns the representable value closest to the true quotient, each result carries at most one rounding error rather than an accumulated one.
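As an illustrative sketch (the variable names acc, div and exact are mine, not from the original code), here is how the two approaches compare as i grows:

from decimal import Decimal, getcontext

getcontext().prec = 60          # keep the tiny differences exact
acc = 0.0
for i in range(1, 4):
    acc += 0.2                  # repeated addition: rounding errors can pile up
    div = i / 5.0               # single division: always the double closest to i/5
    exact = Decimal(i) / 5      # exact reference value i/5
    print(i, Decimal(acc) - exact, Decimal(div) - exact)

At i == 3 the two columns show the errors discussed here: about 8.88e-17 above 3/5 for the running sum and about 2.22e-17 below it for 3 / 5.0.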
For instance, 3/5.0 is represented as:
0.10011001100110011001100110011001100110011001100110011
In decimal, this value falls short of 3/5 by 0.00000000000000002220446049250313080847263336181640625.
Both errors are minuscule, but they differ noticeably: the result of 3/5.0 is four times closer to 3/5 than the result of 0.2+0.2+0.2.
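A final check of that factor of four (illustrative sketch; err_sum and err_div are names chosen here for clarity):

from decimal import Decimal, getcontext

getcontext().prec = 60                                 # keep the exact digits
err_sum = Decimal(0.2 + 0.2 + 0.2) - Decimal(3) / 5    # excess above 3/5
err_div = Decimal(3 / 5.0) - Decimal(3) / 5            # deficit below 3/5
print(err_sum / err_div)                               # -4: the summed result is 4x farther from 3/5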