Author Topic: Faster blocks vs bigger blocks  (Read 2507 times)
Cubic Earth (OP)
Legendary
Activity: 1176 | Merit: 1018
July 01, 2014, 08:31:24 PM  #1

If there is a hard fork to increase the maxblocksize, say to 5MB or 10MB, wouldn't it make sense at that point to drop the block interval to 5 minutes, and raise the maxblocksize by only half of what would otherwise have been done?  The benefit would be faster confirmations, which is helpful in many cases (and I would rather not argue that point).

What would the drawbacks be?  Double the load for SPV servers.  That seems manageable.  Orphan rates should be only negligibly higher, but the time to propagate a block scales linearly with transaction payload.  I assume empty blocks propagate very quickly, though that is only an assumption since I don't know.

gmaxwell has previously raised concerns about network convergence times.  But isn't that mostly a function of the total amount of data that is attempting to stay in sync?  Wouldn't convergence basically be the same for 10MB blocks every 10 minutes as for 5MB blocks every 5 minutes?  Granted that as block time approaches zero, convergence will be a problem.  But for just cutting it in half?  I can't see why convergence rates wouldn't be similar.
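As a back-of-envelope check of that intuition, here is an editorial sketch using the standard approximation that block discovery is a Poisson process; the 1 second-per-MB propagation delay is a made-up assumption.

```python
# If propagation delay were purely proportional to block size, halving both
# size and interval would leave tau/T -- and thus the per-block orphan rate
# p ~= 1 - exp(-tau/T) -- unchanged.
import math

PER_MB = 1.0   # seconds of propagation delay per MB (assumed)

for size_mb, interval_min in [(10, 10), (5, 5)]:
    tau = PER_MB * size_mb
    p = 1.0 - math.exp(-tau / (interval_min * 60))
    print(f"{size_mb} MB / {interval_min} min: p_orphan ~= {p:.4f}")
# Both print ~0.0165: the naive model really does make the two equivalent.
# gmaxwell's reply below points out the fixed latency this model leaves out.
```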

I know the subject of confirmation times has been discussed many times before.  This time I am particularly interested in discussing it in the context of (1) larger blocks which would be (2) part of a hard fork.


Below are some of gmaxwell's points from a related discussion.

https://bitcointalk.org/index.php?topic=260180.0;all


(1) Orphaning rate depends on the time relative to communications & validation delay (formula given in the link).  In the limit as the block time goes to zero the network stops converging and typical reorganizations tend toward infinite length.  The actual delays depend on network topography and block size.  As an aside: in the past we've seen global convergence times on Bitcoin exceed two minutes, and although the software's performance has improved since then, there doesn't seem to be a ton of headroom before convergence failures would become likely in practice; fast convergence is certainly harder with larger blocks.

(1a) There have been altcoins whose creators didn't understand this and set their block times stupidly low, and they suffered almost instant convergence failure (e.g. liquidcoin).  Others may start failing if they ever get enough transaction volume that validation actually takes a bit of time.

(2) The computational/bandwidth/storage cost of running an SPV node, querying a remote computation oracle for signing, or presenting a Bitcoin proof in a non-Bitcoin chain is almost entirely due to the header rate.  Going to 5 minutes, for example, would double these costs.  Increasing costs for the most cost-sensitive usages is not very attractive.

(3) With the exception of 1-confirmation transactions, once blocks are slow enough that orphaning isn't a major consideration there is no real security difference that depends on the particular rate.  For moderate-length attacks the sum of computation matters, and how you dice it up doesn't matter much.  One-confirmation security, however, isn't particularly secure.

(3a) If there is actually a demand for fast, low-security evidence of mining effort, you can achieve it simply by having miners publish shares as P2Pool does.  You could then look at this data and estimate how much of the network hashrate is attempting to include the transaction you're interested in.  This doesn't, however, create the orphaning/convergence problems of (1) or the bandwidth/storage impact on disinterested nodes of (2).

(3b) Because mining is a stochastic lottery, confirmations can take a rather long time even when the mean is small.  Few things you can describe as "needing" a 2-minute mean would actually still be happy with it sometimes taking 5 times that.  Those applications simply need to use mechanisms other than global consensus as their primary mechanism.

(4) While you can debate the fine details of the parameters (perhaps 20 minutes or 5 minutes would have been wiser), because of the above none of the arguments are all that compelling.  Changing this parameter would require the consent of essentially all surviving Bitcoin users; absent a really compelling argument it simply isn't going to happen.

If you'd like to explore these ideas just as an intellectual novelty, Amiller's ideas about merging in evidence of orphaned blocks to target an orphaning rate instead of a time are probably the most interesting.  The problems then become things like preventing cliques of fast miners from self-centralizing against more distant groups who can't keep up, and producing proofs for SPV clients that remain succinct in the face of potentially quite fast blocks.
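To put a number on point (3b) above, an editorial sketch: assuming only that block discovery is a Poisson process, the wait for one confirmation is exponentially distributed, so long waits are routine.

```python
# Tail of the single-confirmation wait under a Poisson block process:
# P(wait > k * mean interval) = exp(-k), independent of the interval itself.
import math

for k in (1, 3, 5):
    print(f"P(wait > {k}x mean) = {math.exp(-k):.4f}")
# P(wait > 5x mean) ~= 0.0067, i.e. roughly 1 confirmation in 150 takes five
# times the mean, so a "2 minute mean" is occasionally a 10+ minute wait.
```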
gmaxwell
Moderator, Legendary
Activity: 4158 | Merit: 8382
July 01, 2014, 09:53:56 PM  #2

Quote from: Cubic Earth
Wouldn't convergence basically be the same for 10MB blocks every 10 minutes as for 5MB blocks every 5 minutes?
No. Lowering the interval lowers the distribution parameter and increases the number of blocks found at the same time; this is true even if there is no serialization delay. Besides, there is no technical need to send N megabytes of data when a block is created: the vast majority can be pre-forwarded, but only P2Pool makes use of that today.

Would half work? Yes, it would almost certainly be fine today, but it's precisely in the cases where you might need to do more work to process a block that the longer times are important.

You also don't need faster inter-block times to have faster confirmations. Consider what P2Pool does: if the network required miners to participate in a faster share chain, and that share chain enforced transaction inclusion, you could have much faster confirmations (though, of course, with reduced security). Importantly, SPV clients and others that don't care about the fast confirmations wouldn't need to pay attention to them, so the benefit could be had without doubling the bandwidth required for SPV nodes.
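To make the first point concrete, an editorial sketch (the latency and per-MB figures are made-up assumptions): any fixed, size-independent component of propagation delay stops shrinking when the block shrinks, so halving both block size and interval raises the per-block orphan rate.

```python
# Orphan-rate comparison with a fixed latency plus a size-proportional delay:
# tau = T_FIXED + PER_MB * size; p_orphan ~= 1 - exp(-tau / interval).
import math

T_FIXED = 2.0   # seconds of round-trip/validation latency (assumed)
PER_MB = 1.0    # seconds of serialization delay per MB (assumed)

for size_mb, interval_min in [(10, 10), (5, 5)]:
    tau = T_FIXED + PER_MB * size_mb
    interval = interval_min * 60
    p = 1.0 - math.exp(-tau / interval)
    print(f"{size_mb} MB / {interval_min} min: p_orphan ~= {p:.4f}, "
          f"orphan events/hour ~= {p * 3600 / interval:.3f}")
# 10 MB/10 min -> p ~= 0.0198; 5 MB/5 min -> p ~= 0.0231, and the 5-minute
# chain also produces twice as many blocks, so orphan events per hour rise.
```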
Sergio_Demian_Lerner
Hero Member
Activity: 549 | Merit: 608
July 02, 2014, 03:27:20 AM  #3

I will give my opinions on gmaxwell's points. My opinion is backed by several simulations, by successfully lowering the block interval to 5 seconds, and by a cryptocurrency (Nimblecoin.org) that works with a 5-second interval:


(1) Orphaning rate depends on the time relative to communications & validation delay (formula given in the link).

True, but transactions can be, and in practice are, pre-verified by nodes, so the validation delay is zero.  Communication time can also be decreased considerably using header-only block propagation.  Using the DECOR+GHOST protocols you remove the selfish-mining incentive and improve convergence on the best-chain choice.

The limit where things start to fail is a block interval of about 2 seconds.
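For readers who haven't seen GHOST, here is a minimal editorial sketch of its fork-choice rule (DECOR, the reward-sharing part, is a separate mechanism and is omitted; the block-tree representation is illustrative only).

```python
# GHOST fork choice (sketch): instead of following the longest chain, descend
# from the root into the child whose whole subtree contains the most blocks,
# so off-chain ("orphaned") siblings still add weight to their ancestors.
def ghost_tip(children, root):
    """children: dict mapping a block to the list of blocks extending it."""
    def subtree_size(block):
        return 1 + sum(subtree_size(c) for c in children.get(block, []))
    block = root
    while children.get(block):
        block = max(children[block], key=subtree_size)
    return block

# Toy tree: genesis -> A -> {A1, A2} and genesis -> B -> B1.  Both tips are
# at the same height, but the A branch's subtree is heavier (3 blocks vs 2).
tree = {"genesis": ["A", "B"], "A": ["A1", "A2"], "B": ["B1"]}
print(ghost_tip(tree, "genesis"))   # A1 (arbitrary tie between A1 and A2)
```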

(1a) There have been altcoins whose creators didn't understand this and set their block times stupidly low, and they suffered almost instant convergence failure (e.g. liquidcoin).

100% true, but they were not using the DECOR+GHOST protocols.

(2) The computational/bandwidth/storage cost of running an SPV node ... is almost entirely due to the header rate. Increasing costs for the most cost-sensitive usages is not very attractive.
Half true, because SPV security is terribly weak anyway: it requires trust in peers, and most SPV implementations rely on a single trusted node.
I prefer SmartSPV security, which does not need to download all headers.
I would also rather trust a single US company (such as Coinbase) to send each block to my smartphone than trust a random Bitcoin node.
In any case, I suppose a 5x increase in the number of headers is completely tolerable for almost any SPV node.
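To put numbers on the header-rate cost, an editorial back-of-envelope assuming standard 80-byte Bitcoin block headers (the 5x figure corresponds to a 2-minute interval, not the 5-second one):

```python
# Yearly SPV header download/storage at various block intervals.
HEADER_BYTES = 80
SECONDS_PER_YEAR = 365 * 24 * 3600

for interval_s, label in [(600, "10 min"), (300, "5 min"), (120, "2 min"), (5, "5 s")]:
    headers = SECONDS_PER_YEAR / interval_s
    mb = headers * HEADER_BYTES / 1e6
    print(f"{label:>6}: {headers:>9.0f} headers/yr ~= {mb:7.1f} MB/yr")
# 10 min -> ~4.2 MB/yr, 5 min -> ~8.4, 2 min -> ~21, 5 s -> ~505 MB/yr.
```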

(3) With the exception of 1-confirmation transactions, once blocks are slow enough that orphaning isn't a major consideration there is no real security difference that depends on the particular rate. For moderate-length attacks the sum of computation matters, and how you dice it up doesn't matter much. One-confirmation security, however, isn't particularly secure.

Not really: the most important factor is the number of confirmations, not the accumulated work. So a DECOR+GHOST altcoin with a 1-minute interval takes one tenth of the time Bitcoin takes to confirm with the same security.
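For context, an editorial sketch: the attacker-success formula from section 11 of the Bitcoin whitepaper depends only on the confirmation count z and the attacker's hashrate fraction q (it ignores orphaning and propagation delay), which is the sense in which confirmations rather than elapsed time carry the security in that simplified model.

```python
# Satoshi's catch-up probability: P that an attacker with fraction q of the
# hashrate ever overtakes the honest chain from z confirmations behind.
import math

def attacker_success(q, z):
    p = 1.0 - q
    if q >= p:
        return 1.0          # a majority attacker always catches up
    lam = z * (q / p)
    total = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam**k / math.factorial(k)
        total -= poisson * (1.0 - (q / p) ** (z - k))
    return total

print(f"{attacker_success(0.10, 4):.4f}")   # ~0.0035 for q = 10%, z = 4
```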

(3a) If there is actually a demand for fast, low-security evidence of mining effort, you can achieve it simply by having miners publish shares as P2Pool does. You could then look at this data and estimate how much of the network hashrate is attempting to include the transaction you're interested in.

A 4-block confirmation in Nimblecoin.org (about 20 seconds) has a 0.1% reversal probability.
In the current P2Pool you would need 1000 shares, or about 8 hours, to achieve the same level of confidence.
A better P2Pool could be created in which each share must include all of the previous shares' transactions. In that better P2Pool you would need only 4 shares (about 2 minutes) to get the same guarantee.
But taking into account that P2Pool has only a small share of the total hashing power, you cannot expect secure confirmations from P2Pool.

(3b) Because mining is a stochastic lottery, confirmations can take a rather long time even when the mean is small. Few things you can describe as "needing" a 2-minute mean would actually still be happy with it sometimes taking 5 times that. Those applications simply need to use mechanisms other than global consensus as their primary mechanism.

With a 5-second block interval and 4 confirmations, the described bad case (which happens perhaps once per week) would take less than two minutes. A 2-minute delay once a week is pretty good for any application.
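A quick editorial way to check tail waits like this, under the usual Poisson assumption: the wait for z confirmations is Erlang-distributed, so the chance of blowing past any deadline can be computed directly.

```python
# P(wait for z confirmations > t) for blocks arriving as a Poisson process
# with mean interval T: the Erlang tail exp(-t/T) * sum_{k<z} (t/T)^k / k!.
import math

def confirmation_wait_tail(z, interval_s, t_s):
    x = t_s / interval_s
    return math.exp(-x) * sum(x**k / math.factorial(k) for k in range(z))

# 4 confirmations at a 5-second interval: waits beyond 2 minutes are rare.
print(f"{confirmation_wait_tail(4, 5, 120):.2e}")   # ~9.9e-08
```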

(4) While you can debate the fine details of the parameters (perhaps 20 minutes or 5 minutes would have been wiser), because of the above none of the arguments are all that compelling. Changing this parameter would require the consent of essentially all surviving Bitcoin users; absent a really compelling argument it simply isn't going to happen.

True, but Bitcoin may have to face competition in the future. I think the really compelling argument will come when other altcoins achieve 1000 tps and 10-second average confirmations and Bitcoin can't. Bitcoin probably cannot go down to 5 seconds, but it can surely go down to 30 seconds.

Best regards, Sergio.
Gavin Andresen
Legendary
Chief Scientist
Activity: 1652 | Merit: 2216
July 03, 2014, 02:56:20 PM  #4

It seems to me having miners share 'near-miss' blocks with each other (and the rest of the world) does several good things.

As Greg says, that tells you how much hashing power is including your not-yet-confirmed transaction, which should let merchants reason better about the risk of their transactions being double-spent.

If the protocol is well-designed, sharing near-miss blocks should also make propagation of complete blocks almost instantaneous most of the time. All of the data in the block (except the nonce and the coinbase) is likely to have already been validated/propagated. See Greg's thoughts on efficient encoding of blocks:  https://en.bitcoin.it/wiki/User:Gmaxwell/block_network_coding
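As a toy editorial illustration of that propagation idea (this is not the scheme from Greg's wiki page, and the function names and block representation are made up): announce a block as its header plus transaction ids, and let peers rebuild it from transactions they have already seen.

```python
# Sketch: relay a block as (header, txids); the receiver reconstructs it
# from its mempool and requests only the transactions it is missing.
def announce(block):
    return block["header"], [tx["id"] for tx in block["txs"]]

def reconstruct(header, txids, mempool):
    missing = [tid for tid in txids if tid not in mempool]
    if missing:
        return None, missing   # ask the sender for just these transactions
    return {"header": header, "txs": [mempool[t] for t in txids]}, []

mempool = {"tx1": {"id": "tx1"}, "tx2": {"id": "tx2"}}
block = {"header": "hdr", "txs": [{"id": "tx1"}, {"id": "tx2"}]}
header, txids = announce(block)
print(reconstruct(header, txids, mempool))   # rebuilt in full, none missing
```

If near-miss blocks have already propagated almost every transaction, the announcement stays a few kilobytes no matter how large the block is.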

So there would almost never be an advantage to working on a smaller block rather than a larger one (it would be very rare to find a full-difficulty block before finding, say, a 1/100th-difficulty block).

Near-instant block propagation if you 'show your work' should give unselfish miners an advantage over miners who try any kind of block-withholding attack. It should also make network convergence quicker in the case of block races: when there are two competing forks, miners could estimate how much hashing power is working on each one, and rational miners will abandon a fork as soon as the previous-block pointers of the near-miss blocks they see make it statistically likely that they're on the losing side.
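An editorial sketch of that estimate (the data format is made up): tally near-miss shares by the block they extend, and read the proportions as the split of hashrate across the competing forks.

```python
# Estimate the fraction of hashrate mining on each fork tip from the
# previous-block pointers of recently observed near-miss shares.
from collections import Counter

def fork_weights(shares):
    """shares: iterable of (prev_block_hash, ...) tuples from share headers."""
    counts = Counter(prev for prev, *_ in shares)
    total = sum(counts.values())
    return {tip: n / total for tip, n in counts.items()}

shares = [("A", 1), ("A", 2), ("A", 3), ("B", 4)]   # hypothetical observations
print(fork_weights(shares))   # {'A': 0.75, 'B': 0.25} -> fork A looks likely to win
```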

We can do all of this without a hard fork. It could even be prototyped as an ultra-efficient "miner backbone network" separate from the existing p2p network-- in fact, I'm thinking it SHOULD be done first as a separate network...

How often do you get the chance to work on a potentially world-changing project?
chriswilmer
Legendary
Activity: 1008 | Merit: 1000
July 04, 2014, 12:10:51 AM  #5

Quote from: Gavin Andresen on July 03, 2014, 02:56:20 PM
It seems to me having miners share 'near-miss' blocks with each other (and the rest of the world) does several good things. [...] It could even be prototyped as an ultra-efficient "miner backbone network" separate from the existing p2p network-- in fact, I'm thinking it SHOULD be done first as a separate network...


Let's do it! Is this something we can list on Mike Hearn's Lighthouse? I'd pledge 10 bitcoins (should consult with wife first... but I think she'd agree it was a worthy cause!)
Sergio_Demian_Lerner
Hero Member
Activity: 549 | Merit: 608
July 04, 2014, 12:37:16 AM  #6

This is my plan:
 
When NimbleCoin is ready, we will test it thoroughly. If you want to donate to the NimbleCoin project, I would happily use the money to hire another programmer and finish it faster. I'm trying to ensure that NimbleCoin has no pre-mining, because I don't like pre-selling coins. If NimbleCoin succeeds, we can bring every feature back to Bitcoin with a hard fork. If it fails, we still have the open-source code to experiment with.

NimbleCoin is implemented on Bitcoinj, so we'll have a known reference codebase to play with.

Best regards,
 Sergio.

