So, the pool sends a header with a random miner id embedded, which cannot be changed. The miner tries to find the nonce that gives the lowest result from the hash function.

Sorry, that part was unclear. What I had in mind was: the pool would send a header with a random number embedded, the miner would append his own bitcoin address to it, and then mine that. There would be a new (alt) bitcoin coin format which would include multiple hashcash outputs, eg say 100 outputs. That means the first 100 or so miners in the pool (not an exact number, mind, as the part-bitcoins have different values) to hit the minimum share difficulty get their part-bitcoins added up by the pool, and the pool publishes the bitcoin.

I think my idea was a bit half-baked. Apart from that lack of clarity, there are two aspects of the amortizable hashcash concept - being able to add them (very approximately) and a metering function. It's probably the case that the metering function, which requires under/over-contribution prevention, is irrelevant for pool-related use: everyone wants to over-contribute, and that's encouraged. So let's say we remove the contribution protection (ie the blinding value and u part). Then what's left? Just an alt bitcoin formed of a list of part-bitcoins, which has lower variance, and the owners of the parts can be different owners. The pool can't benefit from its miners' work without revealing their coin addresses, so the pool can't skim. The downside is the coin gets bigger. However I do not think that initial mining events form a big part of the network traffic - isn't the transaction log the big deal, with all the fractional bitcoin change and combining?

The difficulty can be estimated as 1/(min result)? This can be done in parallel easily.

Correct, the pool would set some minimum work factor to limit the network traffic from miners sending it part-bitcoins. I work in log2 of difficulty because that's the way hashcash was expressed; I think it's clearer to think about, really. The log difficulty right now is 55.1 bits (logdiff = log2(difficulty) + 32 is the bitcoin formula). It's easy to see the difficulty visually in the hashes, eg

http://blockexplorer.com/block/00000000000000bf11ad375a87a5670571ee432fbf629ba0e69e33860461bf84

by counting the leading 0s and multiplying by 4 bits per nibble - yes, it's 56 bits; you get lucky with an extra bit 1/2 the time, two extra bits 1/4 the time, etc.
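The leading-zero arithmetic above can be sketched in a few lines (my illustration; the block hash is the one linked, and a 256-bit sha256 hash is assumed):

```python
import math

def log_difficulty(difficulty):
    # the bitcoin formula quoted above: logdiff = log2(difficulty) + 32
    return math.log2(difficulty) + 32

def leading_zero_bits(block_hash_hex):
    # assumes a 256-bit hash; each leading zero nibble is 4 zero bits,
    # and the first nonzero nibble can contribute up to 3 more
    return 256 - int(block_hash_hex, 16).bit_length()

block = "00000000000000bf11ad375a87a5670571ee432fbf629ba0e69e33860461bf84"
print(leading_zero_bits(block))  # 56
```

Counting whole nibbles gives 14 * 4 = 56 bits; using the integer bit length also picks up the fractional "lucky extra bit" when the first nonzero nibble happens to be small.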

In this idea the pool is mainly saving the miners the network overhead of keeping up with the transaction-log traffic; otherwise they could just post their part-coins to the p2p network directly. Alternatively miners could broadcast their coins if they preferred: eg the whole network, in a p2p sense, could grab the first set of broadcast part-coins that added up to the current difficulty and hash them into the transaction log. In that way your part-bitcoins could go straight to the network, bypassing the pool. Because the part-bitcoins are smaller and released faster, that may create some micro-forks, but perhaps the p2p voting can handle that.

If you have the miner submit back the n best nonces instead of the best, then variance is even lower.

You got it - you could have the part-bitcoins themselves be composed of even smaller subpart-bitcoins (eg 32 of them), and then the miner has lower variance and can actually measure progress - even print a progress bar that means something. (With single-hash bitcoin mining there is no progress, as it is like trying to toss 55 tails in a row with a coin - the coin has no memory.)
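To illustrate the progress-bar point, here is a toy model (my own sketch, not from the thread; the subpart count and per-hash probability are assumed round numbers): each subpart is found with some small per-hash probability, so the count found so far climbs roughly linearly with work, unlike the all-or-nothing single target.

```python
import random

def mine_subparts(subparts=32, p=1 / 1000):
    # count hashes until all subparts are found; found / subparts is a
    # progress bar that actually means something
    found, hashes = 0, 0
    while found < subparts:
        hashes += 1
        if random.random() < p:
            found += 1
    return hashes

print(mine_subparts())  # ~32000 hashes on average, with modest spread
```

With these numbers the total work is the sum of 32 waiting times, so its relative spread is about sqrt(32) times smaller than a single 1-in-32000 target of the same expected cost.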

Then while eg 1/128 of the difficulty is still massive for most miners, the variance of mining is reduced, which is part of the miners' problem. eg Say 128 part-coins = 7 bits, which would make a mining share 48 bits (that's huge even for a 1500MH GPU: it would only have about a 1/436 chance of creating a valid share in 10 mins - that's not good, because no share = no direct payout).
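The share-odds arithmetic can be checked in a few lines (a sketch with assumed round numbers: 1500 MH/s for one 10-minute interval against a flat 48-bit share target; the 1/436 figure above presumably came from slightly different assumptions, eg the exact 55.1-bit difficulty):

```python
import math

hash_rate = 1500e6        # hashes per second (assumption)
seconds = 600             # one 10-minute block interval
share_bits = 48           # ~55-bit coin difficulty minus 7 bits for 128 parts

attempts = hash_rate * seconds
p_per_hash = 2.0 ** -share_bits
# probability of at least one valid share; expm1 keeps this numerically
# stable since p_per_hash is tiny
p_share = -math.expm1(-attempts * p_per_hash)
print(f"~1 in {1 / p_share:.0f}")  # ~1 in 313 with these round numbers
```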

(To elaborate for clarity on the serialization and definition changes: I mean each microcoin would hash its owner's coin address as part of its self-chosen challenge. If the pool uses the client's hash - and it has an incentive to, if it wants to win the pending 10-minute full-sized coin strip and collect the bounty - then the pool contributor unavoidably gets the microcoin.)

This doesn't help with variance, which is the whole point of the pool. It just shows a list of winners, right?

Correct. However you could use it recursively to have the miner create subpart-coins, but each time you increase the number of parts, the coins grow.

But I think there may be a potential problem with the multi-part coin low-variance concept. Imagine the extreme case where there are 1 million part-coins: now there is practically NO variance; it's almost completely deterministic and 100% related to your CPU power. Now the guy with the biggest GPU/ASIC farm is going to get the coin 100% of the time. For hashcash-stamp anti-DoS that determinism is good, but for bitcoin, with its 10-min lottery, that's very bad - winner takes all with almost complete certainty. Even with modest numbers of part-coins the effect exists and stacks the reward in favor of the biggest CPU players - arguably the opposite of what you need, if anything (in terms of centralization resistance). If it's recursive, with the first 100 part-bitcoins past the post sharing the 25 bitcoins, and low-variance part-bitcoins in the race (themselves made of subpart-bitcoins), you still have the same issue: the fastest CPUs win.
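The winner-takes-all effect shows up in a toy simulation (my sketch: a miner with 2x the hash rate races a slower one, with share-finding times modeled as exponential waits, so a coin of many parts is a sum of many waits):

```python
import random

def race(parts, trials=5000, speed_fast=2.0, speed_slow=1.0):
    # fraction of races won by the faster miner
    fast_wins = 0
    for _ in range(trials):
        t_fast = sum(random.expovariate(speed_fast) for _ in range(parts))
        t_slow = sum(random.expovariate(speed_slow) for _ in range(parts))
        fast_wins += t_fast < t_slow
    return fast_wins / trials

for parts in (1, 8, 64):
    print(parts, race(parts))
```

With 1 part the faster miner wins about 2/3 of the time (the slow miner still has a fair chance); with 64 parts it wins essentially every race.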

A loose analogy: imagine current bitcoin miners are race cars. Some are fast (a Ferrari) and some are slow (a Citroen 2CV), but they are all very, very unreliable. So who wins the race? The Ferrari mostly, but the 2CV still has a fair chance relative to its speed, because the Ferrari is really likely to break down. With low-variance coins you have well-maintained cars, and they very rarely break down. So the Ferrari wins almost always. Now if you have a line of 20 cars of varying speeds, all well maintained (low variance), the first 5 past the post are almost certainly going to be the 5 fastest. Hardly anyone else stands a chance.

So I think the takeaway is you can't use low-variance techniques for the underlying coins in any first-past-the-post (or top-10 etc) race - which is what the bitcoin 10-min CPU lottery is, in effect - because it is inherently unfairly stacked in favor of the fastest CPUs.

That's kind of inconvenient, and as you noted, the only other variance-reduction method discussed (that I saw) has been to reduce the difficulty (unpooled) or the share size (pooled). But that can increase bandwidth requirements, because lots of small coins flow up to the pool, or directly to the whole network.

I made a proposal to allow proving work. A node submits a claim and then, a few blocks later, submits proof. A number of hashes are pseudo-randomly selected based on the block-chain hashes for the next few blocks. The node submits the nonces for those hashes. The node must submit the proof in order to unlock their id token. In fact, now that I think about it, they could just include the proof with their next claim. The id token would just be a proof of work, and is reused if they are honest. The value of the id token must be greater than the value of the hash claim divided by the probability of being caught.
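One way the pseudo-random selection step could look (my sketch; the function name and the choice of 8 checks are assumptions, not part of the proposal): a later block hash seeds which claimed hashes must have their nonces revealed, so neither party can predict the audit set when the claim is submitted.

```python
import hashlib

def audit_indices(claim_id, later_block_hash, num_claimed, num_checks=8):
    # deterministically derive distinct check indices from the claim and
    # a block hash that appeared after the claim was submitted
    indices = []
    counter = 0
    while len(indices) < min(num_checks, num_claimed):
        seed = f"{claim_id}:{later_block_hash}:{counter}".encode()
        idx = int.from_bytes(hashlib.sha256(seed).digest(), "big") % num_claimed
        if idx not in indices:
            indices.append(idx)
        counter += 1
    return indices

print(audit_indices("claim-42", "00000000000000bf...", 1000))
```

Both sides can recompute the same indices, so the proof is just the nonces at those positions.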

I think I have to read about p2pool before I can understand what you wrote on that thread. It sounds like you plan a bit commitment to be later revealed.

This might be the same as what you meant, but I was thinking about coin compactness (and maybe it works for pools too): you could demand from the pool a hash including the main bitcoin transaction-log hash, plus the merkle hash tree of the coin addresses of the miners using the pool, plus a log(#shares) hash-chain proof to the miner that his address is in the tree. That would seem to allow proof of contribution; however the generated bitcoin would be quite big, as it would need to include ALL of the shares, but spends of the bitcoin would be compact, just referencing the offset of their address in the generation coin. Alternatively the generated bitcoin could be compact, and the miner could be responsible for disclosing the claim to the bitcoin at time of first use, which would bloat spends - and I believe that's worse, because coins get created once but spent many times.
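The merkle-tree part of this is standard; a minimal sketch (my illustration, assuming sha256 and duplicate-last-leaf padding for odd levels):

```python
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def merkle_root_and_proof(leaves, index):
    # build the tree over the miners' coin addresses and collect the
    # sibling hashes on the path from leaf `index` up to the root
    level = [h(x) for x in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # pad odd levels
        proof.append((index % 2, level[index ^ 1]))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return level[0], proof

def verify(leaf, proof, root):
    node = h(leaf)
    for is_right, sibling in proof:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

addresses = [f"miner-address-{i}".encode() for i in range(128)]
root, proof = merkle_root_and_proof(addresses, 5)
print(len(proof), verify(addresses[5], proof, root))  # 7 True
```

The proof is 7 sibling hashes for 128 shares, ie log2(#shares), which is the compactness claimed above.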

(And now I need to go read p2pool and then your other post. So much to catch up on!)

Adam