1/2500 = 0.04% is far from impossible. Not even really that unlikely.
Not only is it not unlikely, it's extremely likely and SHOULD have happened multiple times.
It's not a 0.04% chance of it happening ever in the lifetime of the universe. It's a 0.04% chance *on any block* the pool solves.
Yes, it is 1/2500, but that means it is expected once in roughly 7 years of continuous operation of a pool of that size.
No, it is not linked to the lifetime of the universe or to *any block*.
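For concreteness, here is a minimal sketch of that arithmetic, assuming the 1/2500 figure is the chance that a gap of this length begins on any given day of operation at the relevant market share:

```python
# Assumption: 1/2500 (0.04%) is the per-day chance that a gap of the observed
# length begins, for a pool at the relevant market share.
p_per_day = 1 / 2500

# For a rare event with per-trial probability p, the expected waiting time is
# 1/p trials (geometric distribution).
expected_days = 1 / p_per_day            # 2500 days
expected_years = expected_days / 365.25  # ~6.8 years

print(f"expected roughly once every {expected_days:.0f} days "
      f"= {expected_years:.1f} years")
# -> expected roughly once every 2500 days = 6.8 years, i.e. about once in 7 years
```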
That gap was confirmed by slush; it is not a problem with his website.
This was an improbable event; let's not argue about the adjectives.
What I claim is:
The hypothesis that it was linked to e.g. a technical problem, a withholding attack, etc. is much stronger than the hypothesis that it was bad luck,
because having e.g. technical problems more often than once in 7 years is quite plausible.
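A hedged sketch of that comparison, with a purely illustrative rate for technical problems (the once-per-year figure is an assumption chosen for the example, not a measured number):

```python
# Illustrative comparison of the two explanations for the gap.
# Assumption (hypothetical): infrastructure problems capable of causing such a
# gap happen about once per year; pure bad luck produces such a gap about once
# per 2500 days (the 1/2500 figure above).
rate_bad_luck = 1 / 2500   # gaps per day explained by chance alone
rate_technical = 1 / 365   # gaps per day explained by technical causes (assumed)

prior_odds = rate_technical / rate_bad_luck
print(f"a technical cause is ~{prior_odds:.0f}x more likely a priori")
# -> a technical cause is ~7x more likely a priori
```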
It would also be disappointing if the discussion of this topic were about adjectives and Slush's operation rather than about using
more advanced metrics for pool auditing.
I used the above probability metrics while running the mining operation of CoinTerra (which had 3-5% of the total network in 2014),
and they were helpful for defining alert levels at which an investigation of the infrastructure was triggered.
The graph below shows the alert levels we used. The trigger was no block found for n hours (y-axis) at the market share given on the x-axis.
The lines are:
- blue (watch): check systems
- red (alert): elevated manual checks
- yellow (panic): search for a problem until you find it (you can see that slush's example is deep in that range)
One might set different levels, but the shapes should look like this. We applied the model both overall and at the data-centre level.
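For anyone who wants to reproduce curves of that shape, here is a minimal sketch. It assumes block discovery is a Poisson process (a network average of 6 blocks per hour, with the pool winning each block with probability equal to its market share); the probability thresholds for the three levels are illustrative placeholders, not the exact values behind the graph above.

```python
import math

BLOCKS_PER_HOUR = 6.0  # long-run Bitcoin average: one block every ~10 minutes

def prob_no_block(share: float, hours: float) -> float:
    """P(a pool with this market share finds no block in `hours` hours),
    assuming block discovery is a Poisson process."""
    expected_blocks = share * BLOCKS_PER_HOUR * hours
    return math.exp(-expected_blocks)

def gap_hours_for_threshold(share: float, threshold: float) -> float:
    """Gap length in hours at which the no-block probability drops to
    `threshold` -- one point on an alert curve for that market share."""
    return -math.log(threshold) / (share * BLOCKS_PER_HOUR)

# Illustrative thresholds only, chosen for the example.
levels = {"watch": 1e-2, "alert": 1e-3, "panic": 1e-4}

for share in (0.03, 0.05, 0.10):
    hours = {name: gap_hours_for_threshold(share, t) for name, t in levels.items()}
    print(f"share {share:.0%}: " +
          ", ".join(f"{name} after {h:.1f} h" for name, h in hours.items()))
```

With these placeholder thresholds, a 3% pool crosses the watch line after roughly a day without a block, and a 10% pool reaches the panic line in well under a day; the curves fall off like 1/share, which is why the lines in the graph have that shape.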