Sorry for the noob question, but how is the 1700 Mh/s number calculated or gathered? I saw my local node's speed in the GUI. Didn't notice a network speed.
Relatively easy: using the block timestamps and the difficulty, extracted from blocks.dat
for example, take the interval from block #68832 to #68976 (= 144 blocks = nominal 24h time period)
68832 was generated 2010/07/18 07:19
68976 was generated 2010/07/19 02:43
difference is 69862 sec
69862 sec / 144 blocks = ~485 sec/block
at a difficulty of 181.5, on average, a block gets found after roughly 2**32 * 181.5 tries
so ~ 780000M tries/block / 485 sec/block = ~ 1607M tries/sec
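The arithmetic above can be written as a minimal Python sketch (heights, timestamps and difficulty are hardcoded from the example; a real tool would parse them out of blocks.dat):

```python
# Estimate network hash rate from block timestamps + difficulty.
from datetime import datetime

def estimate_hashrate(t_first, t_last, n_blocks, difficulty):
    """Average network hash rate in hashes/sec over an interval of n_blocks."""
    elapsed = (t_last - t_first).total_seconds()
    sec_per_block = elapsed / n_blocks
    tries_per_block = 2**32 * difficulty  # expected hashes per block found
    return tries_per_block / sec_per_block

rate = estimate_hashrate(
    datetime(2010, 7, 18, 7, 19),   # block #68832
    datetime(2010, 7, 19, 2, 43),   # block #68976
    144, 181.5)
print(f"~{rate / 1e6:.0f} Mhash/s")  # ~1607 Mhash/s (timestamps here are only minute-precise)
```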
these numbers are of course influenced by the "randomness" of finding blocks, but they are a lot better than blind guesses.
If you are statistically inclined you can also calculate standard margin of error, use overlapping ranges to smooth the curve, ...
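The overlapping-ranges idea could look something like this (a hypothetical sketch: window and step sizes are arbitrary choices, and a constant difficulty is assumed for simplicity):

```python
# Smooth the hash-rate curve by estimating over overlapping sliding windows.
def smoothed_rates(timestamps, difficulty, window=144, step=24):
    """timestamps: block times in seconds, one per block, in order.
    Returns one Mhash/s estimate per overlapping window of `window` blocks,
    sliding forward `step` blocks at a time."""
    rates = []
    for start in range(0, len(timestamps) - window, step):
        elapsed = timestamps[start + window] - timestamps[start]
        rates.append(2**32 * difficulty * window / elapsed / 1e6)
    return rates
```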
The blocks per hour tend to stay steady; it's the difficulty that goes down. Basically he's saying, "I made it hard for everyone because my machine(s) were winning every time"; now that those machines are shut off, "the rest of you should be winning blocks every time".
So the rate remains constant, but everyone should start generating more blocks (perhaps a few per day instead of 1 every 2 weeks like before).
The network adjusts difficulty every 2016 blocks to get to a 6 blocks/hour average, basically: factor = (14 * 24 * 60 * 60) / (actual # of seconds taken for the last 2016 blocks); factor = max(0.25, factor); factor = min(4, factor); new_difficulty = old_difficulty * factor
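The retarget rule above as a small function (a sketch of the rule as stated in the post; the actual client works on compact targets with integer arithmetic, not floating-point difficulty):

```python
# Difficulty retarget: every 2016 blocks, scale difficulty so that a window
# takes two weeks (10 minutes/block), with the adjustment clamped to 4x.
TARGET_TIMESPAN = 14 * 24 * 60 * 60  # two weeks, in seconds

def retarget(old_difficulty, actual_seconds):
    """New difficulty after a 2016-block window that took actual_seconds."""
    factor = TARGET_TIMESPAN / actual_seconds
    factor = max(0.25, min(4.0, factor))  # clamp to 4x change either way
    return old_difficulty * factor
```

Note the direction: if the last 2016 blocks came faster than two weeks, factor > 1 and difficulty rises; slower, and it falls.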
And, as I said, the total hash rate after he supposedly stopped his nodes went UP.
If things continue like this, on the next iteration we'll end up with a difficulty increase to ~ 230.
Which means that at 1000 khash/s you'll generate a block every ~11d10h on average, which works out to ~0.1825 BTC/h
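Those last two numbers follow from the same 2**32 * difficulty rule, assuming the 50 BTC block reward:

```python
# Expected solo-mining block time and payout rate at a given hash rate.
def solo_mining_stats(hashrate, difficulty, reward=50.0):
    """Returns (expected seconds per block, expected BTC per hour)."""
    sec_per_block = 2**32 * difficulty / hashrate
    btc_per_hour = reward / (sec_per_block / 3600)
    return sec_per_block, btc_per_hour

sec, btc_h = solo_mining_stats(1_000_000, 230)  # 1000 khash/s, difficulty ~230
print(f"~{sec / 86400:.1f} days/block, ~{btc_h:.4f} BTC/h")
```

With difficulty exactly 230 this gives about 11.4 days per block; the ~0.1825 BTC/h figure above matches to within rounding of the projected difficulty.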