Are you sure that you used the same point in time to calculate the first 2 points? If C is correct then on average you should always find a point within a 1-minute timeframe.
Are you saying that the next block always happens within one minute of the current time? That isn't true. The
average expected(*) time to the next block is always one minute. It is often more than one minute to the next block.
Note that when staking you don't make "progress" to finding a block. You either find it or you don't. The same with Bitcoin mining. When you are waiting for a block, if you wait 5 minutes without a block being found, it doesn't mean a block is "due", or that a block "should" be found in the next 5 minutes. Even after waiting 5 minutes, the expected time to the next block is still 10 minutes. And if there is no block in the next 10 minutes, the expected time *after that* is still 10 minutes.
(*) I've been using the word "average" when I meant "expected".
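That memorylessness is easy to check numerically. Here's a quick sketch of mine (not from the original post), which assumes the standard model of block gaps as exponentially distributed with a 10-minute mean: draw a million gaps, keep only the cases where we've already waited 5 minutes without a block, and look at the expected *remaining* wait.

```python
import random

# Sketch (my own, illustrative): exponential block gaps with a
# 10-minute mean. After already waiting 5 minutes with no block,
# the expected remaining wait is still 10 minutes.
random.seed(1)
MEAN = 600.0    # 10 minutes, in seconds
WAITED = 300.0  # we've already waited 5 minutes

gaps = [random.expovariate(1.0 / MEAN) for _ in range(1000000)]
remaining = [g - WAITED for g in gaps if g > WAITED]

print("mean gap:            %.1f s" % (sum(gaps) / len(gaps)))
print("mean remaining wait: %.1f s" % (sum(remaining) / len(remaining)))
```

Both averages come out close to 600 seconds: conditioning on having already waited doesn't shorten the expected wait at all.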
Let's pick a random time, say Sat Jan 16 02:33:17 2016.
The previous block was found at 02:32:32 (45 seconds earlier).
The next block was found at 02:33:36 (19 seconds later).
The time to the next block was 19 seconds. The "average" time to the next block was 19 seconds, I guess, if you can average a constant. The average of 19 seconds is 19 seconds. But at that point in time the *expected* time to the next block was 60 seconds, even though at that time it had been 45 seconds since the previous block.
If I chose a random point in the future, then the next block after that point should come half a minute later on average, I think, too.
OK, let's test it.
I wrote a script which picks random points in time over the last month or so, then looks up the time to the previous and next block.
#!/usr/bin/env python
import random, time

def rand():
    # pick a uniformly random timestamp in the sampled range
    return start_date + random.random() * (end_date - start_date)

def fmt(seconds):
    return '[%s]' % time.ctime(seconds)

def find(seconds):
    # 'times' is newest-first: the first timestamp below the target is
    # the previous block; 'last' is then the next block after the target
    last = None
    for sec in times:
        if sec < seconds:
            return sec, last
        last = sec

datfile = "clamblocks.dat"
lines = 100000  # read at most this many blocks
times = []
fp = open(datfile, "r")
for line in fp:
    times.append(int(line.split()[5]))
    if len(times) == lines:
        break
fp.close()

start_date = times[-1]
end_date = times[0]
print "picking random dates between %s and %s" % (fmt(start_date), fmt(end_date))

before_sum = 0.0
after_sum = 0.0
count = 0
while True:
    t = rand()
    before, after = find(t)
    before_sum += t - before
    after_sum += after - t
    count += 1
    if count % 1000 == 0:
        print ("(%6d) %s is %6.2f seconds after %s (%6.2f) and %6.2f seconds before %s (%6.2f)" %
               (count,
                fmt(t),
                t - before, fmt(before), before_sum / count,
                after - t, fmt(after), after_sum / count))
I happened to have the data in a file already, so it's a bit quicker than querying the clam daemon. But never mind that. Here's the output of the script. It shows the average times in (parentheses):
(200000) [Fri Nov 13 01:41:39 2015] is 67.13 seconds after [Fri Nov 13 01:40:32 2015] ( 52.78) and 44.87 seconds before [Fri Nov 13 01:42:24 2015] ( 52.96)
(201000) [Tue Dec 8 16:40:24 2015] is 40.73 seconds after [Tue Dec 8 16:39:44 2015] ( 52.76) and 7.27 seconds before [Tue Dec 8 16:40:32 2015] ( 52.97)
(202000) [Sun Jan 17 05:09:04 2016] is 16.23 seconds after [Sun Jan 17 05:08:48 2016] ( 52.77) and 31.77 seconds before [Sun Jan 17 05:09:36 2016] ( 52.97)
After picking 200k random points in time the average of all the actual times from the previous block is 52.78 seconds, and the average of the actual times to the next block is 52.97 seconds.
I'm surprised it's coming out around 53 seconds and not 60, but can imagine two explanations:
1) the CLAM network is always a little bit too fast; the average block time is 59.xx seconds, not 60 seconds; not a significant error
2) the average time to the next block is 60 seconds because it's possible (though unlikely) to have *very* long gaps between blocks; we're not seeing those very long gaps in the sample that I'm averaging over, but we are seeing lots of short gaps
Either way, it's closer to 60s than to 30s, and their sum is way over 60s.
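The same experiment can be run on a synthetic chain, where explanation (2) can't bite because we control the gap distribution. This is my own sketch, not the CLAM data: build a fake chain whose gaps are exponential with a 60-second mean, then sample random points and average the distances to the surrounding blocks, the same way the script above does.

```python
import bisect
import random

# Synthetic check (my sketch, not the CLAM data): a fake chain with
# exponentially distributed 60-second gaps. Pick random points in time
# and average the distances to the previous and next block.
random.seed(2)
MEAN = 60.0
times = [0.0]
while times[-1] < 10000000.0:  # about 10M seconds of simulated chain
    times.append(times[-1] + random.expovariate(1.0 / MEAN))

N = 200000
back_sum = fwd_sum = 0.0
for _ in range(N):
    t = random.uniform(0.0, times[-2])
    i = bisect.bisect(times, t)  # times[i-1] <= t < times[i]
    back_sum += t - times[i - 1]
    fwd_sum += times[i] - t
print("average to previous block: %.1f s" % (back_sum / N))
print("average to next block:     %.1f s" % (fwd_sum / N))
```

With the long gaps fully represented, both averages land near 60 seconds, not 30, which supports explanation (2) for the slightly-low 53-second figure from the real data.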
Edit: I ran it again, using a year's worth of blocks, and let it run for longer. The results barely changed:
picking random dates between [Sun Jan 18 03:37:36 2015] and [Thu Jan 21 07:04:00 2016]
(276000) [Mon Feb 23 00:40:30 2015] is 62.58 seconds after [Mon Feb 23 00:39:28 2015] ( 52.74) and 289.42 seconds before [Mon Feb 23 00:45:20 2015] ( 52.76)
First, statements A and B don't make sense. The average amount of time from the chosen point in time is a singular number. You probably mean the average of a distribution of randomly chosen points.
You are correct. The average time to the next block from a particular random point in time is whatever the actual time to the next block was. I was being sloppy. I meant the expected time to the next block if the future wasn't already known.
You wake up, turn on your computer, look at blockchain.info. How long since the last block was found? Make a note. Wait for the next block; how long does it take from when we woke up? Make a note. Repeat this every day for a year, average times to the previous blocks, and average the times to the next blocks. Do you get something close to 5 minutes for both averages or something close to 10 minutes?
I think SebastianJu would tell us that on average we are half-way between blocks, so the average time would be 5 minutes to the previous and 5 minutes to the next. I'm claiming that the average is actually 10 minutes in both directions, and that the sum of the two averages would be 20 minutes.
But I am also claiming that the average time between BTC blocks is 10 minutes.
[/quote]
The error is in your final assertion, "Wouldn't you expect ...". No, I wouldn't expect that.
Right. A+B = C is false. In fact A + B = 2C.
The expected time to the previous block + the expected time to the next block = twice the expected block time.
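The reason A + B = 2C is the classic waiting-time (or "inspection") paradox: a randomly chosen instant is more likely to land in a long gap than a short one, so the gap *containing* that instant is length-biased and averages 2C even though the plain average over all gaps is C. A sketch of that, again on made-up exponential 60-second gaps rather than real data:

```python
import bisect
import random

# Length-bias sketch (synthetic, 60 s mean gaps): the gap containing a
# randomly chosen instant averages about 2C, while the plain average
# over all gaps is C.
random.seed(3)
MEAN = 60.0
times = [0.0]
while times[-1] < 10000000.0:
    times.append(times[-1] + random.expovariate(1.0 / MEAN))

gaps = [b - a for a, b in zip(times, times[1:])]

N = 100000
picked_sum = 0.0
for _ in range(N):
    t = random.uniform(0.0, times[-2])
    i = bisect.bisect(times, t)  # times[i-1] <= t < times[i]
    picked_sum += times[i] - times[i - 1]

print("mean of all gaps:                %.1f s" % (sum(gaps) / len(gaps)))
print("mean of gaps containing a point: %.1f s" % (picked_sum / N))
```

The first average comes out near C (60 s) and the second near 2C (120 s); splitting that length-biased gap at the sampled point gives the expected C backward and C forward from the posts above.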