Topic: adopting block chain orphans
adam3us (OP)
May 03, 2013, 11:54:28 PM
#1

It seems to me that discarding orphan blocks outright loses their potential utility in hardening the byzantine voting on the transaction log.  I.e. work went into them, but currently they are given no weighting in the chain length (AFAIK).  Therefore, to the extent that orphans happen, they weaken security, because a 50% attacker won't accidentally create self-orphans on his hostile private chain.  Also, a 50% attacker might try to disrupt the network to induce network splits that increase the chances of orphans: not slowing the network down, nor over-powering it, just fragmenting its power so that he ends up with as much power as the largest fragment, and can foist a 6-length chain against the sudden flurry of fragmented 5-length chains that emerge from the significantly net-split network as he drops the net-split attack.

Therefore, for both reasons, how about this as an enhancement to make 50% attacks harder and the network less vulnerable to net splits: blocks have a list of predecessor block hashes, rather than the current single predecessor.  A slow network or slow node may reveal its block late (or, equivalently, may have just recovered from a net-split attack), but that block can be included in the next round.  To validate a block for inclusion in the predecessor list of a new block, all that is required is that the node agrees that all included blocks pass validation (no double spending etc.) and don't contain mutually conflicting transactions.  Usual arbitration for two conflicting blocks as now (though potentially augmented with a higher-difficulty-block-wins rule - see variable difficulty below).
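
As a minimal sketch of the compatibility check I have in mind (toy python, names illustrative; assume each candidate predecessor is summarised by the set of outpoints its transactions spend, and that each has already passed normal validation on its own):

Code:
# Toy sketch, not real bitcoin code: a block commits to a *list* of
# predecessor block hashes, and a node accepts that list only if no two
# predecessors spend the same outpoint (i.e. no mutually conflicting txs).
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class CandidateBlock:
    block_hash: str
    spent_outpoints: Set[str] = field(default_factory=set)  # "txid:index" strings

def predecessors_compatible(preds: List[CandidateBlock]) -> bool:
    seen: Set[str] = set()
    for blk in preds:
        if blk.spent_outpoints & seen:   # conflicting spend across two predecessors
            return False
        seen |= blk.spent_outpoints
    return True

# two near-simultaneous blocks spending disjoint outpoints can both be adopted
a = CandidateBlock("aaaa", {"tx1:0", "tx2:1"})
b = CandidateBlock("bbbb", {"tx3:0"})
assert predecessors_compatible([a, b])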

With this approach, faster, smaller transaction blocks could perhaps also be used, even blocks with variable difficulty, opening the possibility of direct pool-free mining and combating mining variance.

Reward is claimed incrementally in proportion to the difficulty of the block relative to the network difficulty.  Once a block's reward is used up, no more can be claimed.  A small proportion of the reward may need to be carried forward to incentivize later blocks to include the block in their predecessor block lists.
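
As a toy illustration of the accounting (numbers made up, a 25 BTC subsidy assumed; exactly how the cap and the carry-forward should work is one of the open questions below):

Code:
# Toy accounting sketch: a partial block of difficulty d claims d/D of the
# subsidy, where D is the network difficulty, until the subsidy is used up.
SUBSIDY = 25.0

def claim(pool_remaining, block_difficulty, network_difficulty):
    share = (block_difficulty / network_difficulty) * SUBSIDY
    paid = min(share, pool_remaining)    # once used up, nothing more can be claimed
    return paid, pool_remaining - paid

pool = SUBSIDY
for d in (0.25, 0.5, 0.4):               # partial blocks, as fractions of network difficulty
    paid, pool = claim(pool, d, 1.0)
    print("difficulty %.2f -> paid %.3f BTC, %.3f BTC remaining" % (d, paid, pool))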

(This idea for discussion is vaguely related to my post about the 2002 amortizable hashcash paper - you could view the list of blocks as the same as the amortization list.)

Some general concerns: more block packets - does that create a scale-limiting increase in network traffic?  (Are blocks getting too big anyway?)  Is the modified incremental block reward too complicated?  Is there a way to simplify it, e.g. limits on block sizes and/or a maximum transaction fee per block?  Maybe there is an alt-coin that has already experimented in this direction?  Slightly related to p2pool (the p2p pool implementation), but I think different in objective.

Adam

adam3us (OP)
May 04, 2013, 12:32:11 AM
#2

Quote from: adam3us
With this approach, faster, smaller transaction blocks could perhaps also be used, even blocks with variable difficulty, opening the possibility of direct pool-free mining and combating mining variance.

Reward is claimed incrementally in proportion to the difficulty of the block relative to the network difficulty.  Once a block's reward is used up, no more can be claimed.  A small proportion of the reward may need to be carried forward to incentivize later blocks to include the block in their predecessor block lists.
...

I'm sure someone's going to find some issue with the above.  But anyway, some more thoughts: because it's no longer a first past the post race (anyone can post a mined block of any difficulty at any time), the elusive variance-reducing techniques become safely possible (I think).  I.e. you can amortize your mining offline and post it when you're ready to cash it in.  Subject to sensible message sizes (you just need multiple nonces, one per challenge), you could reduce variance until it's quite smooth.  It's clearly safe because whether you post them immediately or post them in a batch later for a combined reward, it's the same thing - just batching network packets.  Your only risk is posting them after the reward is all used up.

Maybe there's some way to make the reward more continuous and adapted to ongoing unpooled mining.

Also, in the interests of network traffic (re the parent post), you probably don't want to retransmit transactions already published in other blocks, so you could refer to them by block hash and add only new transactions.  In that way a block could even be quite small (adding no transactions) and yet claim a high share, or even most, of the reward.

Adam

gmaxwell
May 04, 2013, 03:35:53 AM
#3

Quote from: adam3us
But anyway, some more thoughts: because it's no longer a first past the post race

Mining is absolutely categorically emphatically undoubtedly NOT a "first past the post race". There is no upper bound on the number of blocks solved per unit time. When a new block is found on the network you simply switch to extending the new chain.  Someone else finding a block does not diminish your chances of finding a block; no work of yours is "lost" when the chain moves to the next state, because none is accumulated... and the hashcash process is not a search of a finite space (there may be many solutions, or no solutions, for a particular header - indeed there is only one valid nonce in ten million headers these days).

The reason people pool is simply that receiving Bitcoins in quanta of tens of times your expected return per day is high variance.  I happily solo mine and have had days where my farm produces >>40x the expected return, and days when it gets nothing. On average I'm a couple of percent above the expected value. _It is not a race_: if it were, I'd either never have a block at all or would have only a tiny fraction of the expectation.
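
A toy simulation of that lumpiness (hashrate share, subsidy and seed made up purely for illustration):

Code:
# Block finding is memoryless: a small solo miner's daily income is mostly
# zero with occasional 25 BTC days, yet the long-run mean tracks expectation.
import random
random.seed(1)

share = 1.0 / 4000           # assumed fraction of the network hashrate
blocks_per_day = 144
subsidy = 25.0
days = 365

income = [sum(subsidy for _ in range(blocks_per_day) if random.random() < share)
          for _ in range(days)]

print("expected %.2f BTC/day" % (share * blocks_per_day * subsidy))
print("best day %.0f BTC, zero-income days %d/%d, mean %.2f BTC/day"
      % (max(income), sum(1 for x in income if x == 0), days, sum(income) / days))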

There is some overall work on the network lost due to chain fraying— but because all nodes experience latency there isn't a relative disadvantage created by this. As you note this creates some small advantage for an attacker— but it's only a very small amount so long as the time between blocks is large relative to network diameter (latency). If the time between blocks becomes small relative to diameter then the network will start having convergence failures and large reorgs (even absent an attacker). Controlling the time between blocks is also important for minimizing bandwidth and computation, especially for SPV nodes.  Amiller had made a nice suggestion regarding merging orphans for the purpose of making the block time dynamically adapt to the diameter, though that doesn't itself address keeping the network usable by SPV nodes.

FWIW, "P2pool" does solve the variance nicely— including allowing miners variable difficulty work (though confined to not result in shares faster than six per minute, to control the cost and prevent convergence problems)— without burdening the perpetually stored Bitcoin network with frequent tiny blocks.
TierNolan
May 04, 2013, 10:51:14 AM
#4

Quote from: gmaxwell
Mining is absolutely categorically emphatically undoubtedly NOT a "first past the post race". There is no upper bound on the number of blocks solved per unit time.

There is an effective upper limit due to network latency.  If someone else gets a block within the latency window, then they are costing you 50% of a block on average.
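
Back-of-envelope version of what I mean (latency figure made up):

Code:
# If your freshly found block takes `latency` seconds to reach the network and
# someone else finds a block in that window, a roughly even tie-break means
# you keep the reward only about half the time.
latency = 10.0               # assumed propagation time, seconds
block_interval = 600.0       # average seconds between network blocks
p_collision = latency / block_interval
expected_loss = 0.5 * p_collision
print("collision chance ~%.1f%%, expected loss ~%.2f%% of a block reward per block found"
      % (100 * p_collision, 100 * expected_loss))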

I don't think you actually need to have a rule for transaction merging.  In fact, you don't really need to check transactions at all.  However, if you want to make sure it was a real block, you could just check that the orphan block would have been valid given its previous block.

The orphan block might get 50% of the block reward, and no tx fees, to encourage broadcasting the headers.

Quote from: gmaxwell
There is some overall work on the network lost due to chain fraying— but because all nodes experience latency there isn't a relative disadvantage created by this.

One option here would be to define the POW for a chain so that orphans are simply included.  Nodes could store the headers for orphans.

Orphans before the fork would be equal for both chains anyway.

Sending orphan headers to nodes would be required, so merging is a nice way to make sure all nodes have all orphans.

adam3us (OP)
May 04, 2013, 10:59:34 PM
#5

Quote from: adam3us
But anyway, some more thoughts: because it's no longer a first past the post race

Quote from: gmaxwell
Mining is [...] NOT a "first past the post race". There is no upper bound on the number of blocks solved per unit time. When a new block is found on the network you simply switch to extending the new chain.

While the effect is the same, I disagree: the race to claim transaction fees and reward is a first past the post race, because orphan blocks do not get to keep any of the fees or reward (in the single-winning-chain approach).  The fact that miners start a new race as soon as they learn that the previous race is won doesn't mean they are not engaging in a first past the post race (it just means they enjoy racing and immediately try the next race ;)

The reason bitcoin mining is fair, despite the first past the post race, is that hashcash based proof-of-work is power-fair.

Hashcash proof of work is power-fair because, as you alluded, it has no memory (it's like a coin toss, with no progress within the work, and all sequences of nonce choices taking the same amount of work).  Most of the other proof-of-work functions do not have this power-fairness property (e.g. client-puzzles, amortizable hashcash, time-lock, Dwork-Naor pricing functions (maybe)).  Scrypt is power-fair, I think.  If scrypt turned out not to have the power-fair property, it's a security bug, and people with fast processors would be able to gain a disproportionate advantage.

However, the need for power-fairness in the proof-of-work function arises only because of the first past the post race choice.  For other, cooperative race types it is not needed.

A way to see why power-fairness is needed in first past the post (and that bitcoin is first past the post) is to imagine the bitcoin proof of work were tweaked to use a simple non-power-fair proof like amortizable hashcash, with e.g. 256 smaller proofs of work and the same expected 10 minutes total... 2.34 seconds per challenge.  (Amortizable here just means the challenge is to collect solutions to 256 sub-challenges.)  This achieves 16x lower standard deviation, which is potentially desirable because it is achieved without incurring network traffic, neither on the main chain nor on a p2pool chain.  With this approach you can see there is work-progress, so it is no longer power-fair.  I.e. a fast node is going to win races disproportionately, even after accounting for its power.
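
A toy simulation of the effect (illustrative numbers: one miner with 2x the power of another, 10-minute expected solve time; a memoryless race splits wins roughly 2:1, while the 256-sub-puzzle race hardly ever lets the slower miner win):

Code:
# Compare win rates of a 2x-fast miner vs a 1x miner under (a) a memoryless
# hashcash race and (b) a race to finish 256 smaller sub-puzzles ("amortizable").
import random
random.seed(0)

def solve_time(mean, speed):
    return random.expovariate(speed / mean)   # exponential, i.e. memoryless work

def single_race(mean=600.0):
    return solve_time(mean, 2.0) < solve_time(mean, 1.0)       # does the fast miner win?

def amortized_race(n=256, mean=600.0):
    fast = sum(solve_time(mean / n, 2.0) for _ in range(n))
    slow = sum(solve_time(mean / n, 1.0) for _ in range(n))
    return fast < slow

trials = 2000
print("memoryless race: fast miner wins %.2f of races" %
      (sum(single_race() for _ in range(trials)) / trials))    # ~0.67
print("256 sub-puzzle race: fast miner wins %.2f of races" %
      (sum(amortized_race() for _ in range(trials)) / trials)) # ~1.00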

I made a racing car analogy for reduced variance in https://bitcointalk.org/index.php?topic=182252.msg1911750#msg1911750

Quote from: adam3us
A loose analogy imagine currently bitcoin miners are race cars.  Some are fast (ferrari) and some are slow (citroen 2cv) but they are all very very unreliable.  So who wins the race?  The ferrari mostly, but the 2cv still has a fair chance relative to its speed because the ferrari is really likely to break down.  With low variance coins, you have well maintained cars, and they very rarely break down.  So the ferrari wins almost always.  Now if you have a line of 20 cars of varying speeds, well maintained (low variance) the first 5 that are going to get past the post are almost certainly going to be the 5 fastest.  No one else stands a chance hardly.

You make some more points:

Quote from: gmaxwell
Controlling the time between blocks is also important for minimizing bandwidth and computation, especially for SPV nodes.  Amiller had made a nice suggestion regarding merging orphans for the purpose of making the block time dynamically adapt to the diameter, though that doesn't itself address keeping the network usable by SPV nodes.

FWIW, "P2pool" does solve the variance nicely— including allowing miners variable difficulty work (though confined to not result in shares faster than six per minute, to control the cost and prevent convergence problems)— without burdening the perpetually stored Bitcoin network with frequent tiny blocks.

Your points about the increased number of packets and the slight bandwidth increase are valid downsides.
(I think the bandwidth increase would not have to be too large, as nodes could refer to other variable-cost blocks by block hash; they only need to add any additional transactions they have seen that are missing.)

I think I need to re-read p2pool a 2nd time to comment on the other bit.

Quote from: gmaxwell
If the time between blocks becomes small relative to diameter then the network will start having convergence failures and large reorgs (even absent an attacker).

Btw that sounds like a separate argument against alt-coins that shorten the block time interval.

Adam

John Tobey
May 05, 2013, 12:27:15 AM
#6

Quote from: TierNolan
One option here would be to define the POW for a chain so that orphans are simply included.  Nodes could store the headers for orphans.

I suppose you mean only the first block in an orphaned chain is included, or perhaps the first few blocks.  Otherwise, it could be hard to distinguish "orphaned" chains from an attacker's chain or, if the attack is succeeding, from the honest chain.

Orphaned blocks help protect the network under this scheme, where chain POW is replaced by the product of transaction fee times work over all transactions in the chain and all blocks, including orphans.  This proposed system also features resistance to >> 50% attacks whose goal is to exclude a fraction of all transactions.

Can a change to the best-chain criteria protect against 51% to 90+% attacks without a hard fork?
gmaxwell
May 05, 2013, 01:04:01 AM
#7

Quote from: adam3us
While the effect is the same, I disagree: the race to claim transaction fees and reward is a first past the post race,
For fees, yes, but only to the extent that there is not a backlog of available fees beyond what you can put into a block and that the fee difference is less than the lost subsidy (right now fees are something like 0.07% of mining income, so it's a non-issue).

If there are not enough transactions to fill the block there are other issues— e.g. the greedy-rational mining behavior can easily be to constantly try to orphan the current non-self best block until enough txns show up that you can't fit more in and thereby increase your income.  Retep (Peter Todd) proposes that people should produce all their transactions nLockTimed based on how early an honest network would conceivably mine them, thus creating a constant base of transactions which _cannot_ be mined at the current height and producing an incentive to move forward.  E.g. at a minimum you lock to be only mine-able at the current height, but if you don't need fast processing, or you see that there is already a long backlog ahead of you, you might lock one or two blocks further in the future.
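
A rough sketch of that convention (illustrative only; the exact nLockTime encoding and its off-by-one details are glossed over here):

Code:
# Lock each transaction to the earliest height an honest network would
# plausibly mine it, so there is always a backlog of fees that can only be
# claimed by extending the chain rather than re-mining the current best block.
def earliest_minable_height(current_height, urgent, fee_backlog_blocks):
    if urgent:
        return current_height + 1         # minable in the very next block
    # plenty of fee-paying transactions already queued: push 1-2 blocks further out
    return current_height + 1 + min(2, fee_backlog_blocks)

print(earliest_minable_height(100000, urgent=True, fee_backlog_blocks=0))   # 100001
print(earliest_minable_height(100000, urgent=False, fee_backlog_blocks=5))  # 100003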


TierNolan
May 05, 2013, 11:38:18 AM
#8

Quote from: John Tobey
I suppose you mean only the first block in an orphaned chain is included, or perhaps the first few blocks.  Otherwise, it could be hard to distinguish "orphaned" chains from an attacker's chain or, if the attack is succeeding, from the honest chain.

I think as long as orphans are headers only, then it shouldn't be that difficult.  The orphans are purely for chain strengthening.  Miners get no bonus.

The rule could be that they broadcast their header first, and then their new block.  Ties are broken in favor of the header which arrived first (assuming you have the transactions for both blocks).

This creates an incentive to broadcast the headers, so it doesn't matter that there is no reward.

Allowing longer orphan chains means that you can run the main chain faster.  However, then you end up with orphan "foam" instead of a single main chain head.

adam3us (OP)
May 05, 2013, 02:59:12 PM
#9

Quote from: gmaxwell
Controlling the time between blocks is also important for minimizing bandwidth and computation, especially for SPV nodes.

Quote from: adam3us
[...] (I think the bandwidth increase would not have to be too large, as nodes could refer to other variable-cost blocks by block hash; they only need to add any additional transactions they have seen that are missing.)

Towards reducing the extra network packets, maybe the block-detached mining contributions could be piggybacked on transaction relay messages.

Then you could imagine the block grows in weight, but grows in size only in proportion to the number of new transactions it contains, as the references to the contributing blocks it builds on top of can still have a limited fan-out as people contribute to them, each block gathering by reference only the current orphan blocks it has seen (in its list of predecessor blocks, which would be implemented as a merkle tree).  The number of current orphan blocks should stay small, because every miner is trying to merge orphans, and as soon as a block with an orphan merged into it reaches your node, it is no longer an orphan from your perspective.

If the reward can be correctly slanted, there is maybe no need for all peers to independently add transactions, which would be the most network costly aspect.

Maybe it can even be a continuous process - no 10-minute cut-off, as there is no longer a post to race to.  The network's consensus on the biggest block just keeps growing in weight (the weight, or value, of all blocks seen so far).  Mining value is assigned to coins in proportion only to the amount they contributed to the continuously, ever-growing distributed network mining activity.
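
As a sketch of what "growing in weight" could mean (toy python, names illustrative): the best tip would be the one whose set of reachable predecessor blocks has the greatest total difficulty.

Code:
# Total weight of a tip = sum of difficulties of every block reachable through
# the predecessor lists, so adopted orphans' work still counts.
def total_weight(tip, predecessors, difficulty):
    seen, stack, weight = set(), [tip], 0.0
    while stack:
        h = stack.pop()
        if h in seen:
            continue
        seen.add(h)
        weight += difficulty[h]
        stack.extend(predecessors.get(h, []))
    return weight

# toy DAG: block "c" adopts both "a" and the would-be orphan "b"
preds = {"c": ["a", "b"], "a": [], "b": []}
diff = {"a": 1.0, "b": 1.0, "c": 1.0}
print(total_weight("c", preds, diff))    # 3.0 - the orphan's work still counts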

I'll re-read the SPV method to think about efficiency.  And p2pool.

Adam
