I wonder if this is kind of a religious thing. To float or not to float.
But we don't talk about Mac vs PC here. It's science, Mr. White. Science, bitch!
It should be a math thing, but it becomes superstition.
Superstition is not always good, even if it happens to work out in some cases.
Math is math; you shouldn't be able to argue with math using superstition.
The error of representing 1/3 with a 64-bit integer depends on the denominator (scale factor) you pick.
With floating point, the hardware deals with that for you.
Now add 1/3, a bunch of other fractions, and some values that aren't fractions at all. With integers, did you pick the right multiplier? Was the error better than floating point? Maybe it was, but unless you really know the dynamic range of your input numbers, odds are the error is smaller with floating point.
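A minimal sketch of the point, comparing the error of 1/3 stored as a scaled 64-bit integer against a 64-bit double. The 10**8 scale factor is just an illustrative assumption, not taken from any real coin:

    from fractions import Fraction

    # Sketch: error of 1/3 as a scaled integer (fixed point) vs a binary64 double.
    SCALE = 10**8                                  # assumed scale factor, for illustration
    exact = Fraction(1, 3)

    # Fixed point: round(1/3 * SCALE), stored as an integer.
    fixed = round(exact * SCALE)                   # 33333333
    fixed_err = abs(Fraction(fixed, SCALE) - exact)

    # Double: the nearest binary64 value to 1/3.
    fp = 1 / 3
    fp_err = abs(Fraction(fp) - exact)             # Fraction(float) is exact

    print(f"fixed-point error: {float(fixed_err):.1e}")   # ~3.3e-09
    print(f"double error:      {float(fp_err):.1e}")      # ~1.9e-17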
So to give people the most accurate allocation of nodecoins, I decided to use floating point and make sure I didn't make any errors. Any 64-bit integer implementation would at best approach the accuracy of 64-bit floating point in this use case.
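For context, a rough sketch of why the totals still get checked when the shares are floating point; the equal-split setup and names here are illustrative assumptions, not the actual nodecoin code:

    import math

    # Sketch: paying a pot out in equal binary64 shares.
    pot = 1.0                          # assumed pot size, for illustration
    recipients = 10
    share = pot / recipients           # 0.1 is not exactly representable

    paid = sum([share] * recipients)
    print(paid)                        # 0.9999999999999999
    print(paid == pot)                 # False
    print(math.isclose(paid, pot))     # True -- off by about one ulp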
You have to analyze each use case to know which is better to use. All things being equal, use integers.
Now, the financial guys say: divide 100 pennies three ways -> 33 + 33 + 34 = 100.
They are happy that it is exact. However, I feel that is unfair, since one of the three got an extra penny.
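For reference, a minimal sketch of that exact-pennies split; the function name and the choice of who gets the leftover penny are assumptions for illustration:

    # Sketch: everyone gets the floor share, and the leftover pennies
    # go to the last few recipients so the total stays exact.
    def split_pennies(total_cents: int, parts: int) -> list[int]:
        base, leftover = divmod(total_cents, parts)
        return [base + (1 if i >= parts - leftover else 0) for i in range(parts)]

    shares = split_pennies(100, 3)
    print(shares, sum(shares))         # [33, 33, 34] 100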
For this, I am taking all this heat. But I cannot code any other way. I have to write what feels right to me.
James