Bitcoin Forum
Pages: « 1 2 3 [4] 5 6 7 8 9 »  All
Topic: New paper: Accelerating Bitcoin's Transaction Processing  (Read 36280 times)
johnyj (Legendary; Activity: 1988, Merit: 1012)
December 07, 2013, 02:23:39 PM  #61

If we drop this value from 1 block per second to 1 block per 30 seconds, then TPS will drop to 7, the same as the current design Huh

You need to multiply blocksize b as well.

This is what confuses me: at 32 transactions per KB, a 320KB block works out to 10240 transactions per block, not the 242.4 transactions per second stated on page 21 of the paper

But since they are using a different method to propagate the blocks, maybe what they say is true? Unfortunately that 242.4 TPS figure comes from a formula whose meaning is unclear to me; I have to read more
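For what it's worth, the naive arithmetic in question is easy to check with a quick sketch (the 32 tx/KB and 320KB figures are the ones quoted in this thread; the paper's 242.4 TPS presumably comes out lower because its formula also models propagation delay, which this simple division ignores):

```python
def naive_tps(block_kb, tx_per_kb, block_interval_s):
    """Throughput if every block were full and propagation were free:
    transactions per block divided by the block interval."""
    return block_kb * tx_per_kb / block_interval_s

tx_per_block = 320 * 32            # 10240 transactions in a 320KB block
print(tx_per_block)                # 10240
print(naive_tps(320, 32, 1))       # 10240.0 TPS at one block per second
print(naive_tps(320, 32, 600))     # ~17 TPS at Bitcoin's 10-minute interval
```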

Peter Todd (Legendary, expert; Activity: 1120, Merit: 1150)
December 07, 2013, 03:50:50 PM  #62

Quote
1) Advantage of nodes with low latency. Nodes that can reach the rest of the network quickly have more blocks on the main chain and less orphans.

Answer: This is already going on in the current bitcoin protocol. We don't improve things, but our modification doesn't hurt either.

Actually it makes the problem significantly worse, as more orphans lead to more opportunities for a larger miner to gain an advantage over a smaller one.

Note the equations for P(Q,L) in the paper I'm working on that analyzes centralization incentives: they all depend on the block interval in such a way that a smaller block interval makes larger hashing power more significant.

I suggest you correct your post.

johnyj (Legendary; Activity: 1988, Merit: 1012)
December 07, 2013, 04:04:23 PM  #63

A practical consideration: just as Christian Decker and Roger Wattenhofer reported, a 320KB block would take 80 seconds to reach 90% of nodes, or 40 seconds to reach 50% of nodes. In such a case, any block generated faster than every 40 seconds cannot reach half of the network.

And if blocks were generated every 5 seconds, most of them would reach only a tiny part of the whole network, no matter how small they are. The whole network would then hold many conflicting views of which subtree is the biggest, and thus become segregated.

How can this problem be resolved?
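As a rough illustration of the worry above, here is a crude linear interpolation through the two reported data points (50% of nodes at 40s, 90% at 80s, for a 320KB block); this is not Decker and Wattenhofer's actual measurement curve, just a back-of-the-envelope model:

```python
def fraction_reached(seconds, t50=40.0, t90=80.0):
    """Crude linear model of node coverage versus elapsed time, fitted
    through the two data points quoted above. Clamped to [0, 1]."""
    frac = 0.5 + (seconds - t50) * (0.9 - 0.5) / (t90 - t50)
    return max(0.0, min(1.0, frac))

print(fraction_reached(40))   # 0.5
print(fraction_reached(5))    # ~0.15: 5-second blocks reach few nodes in time
```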

kcirazy (Newbie; Activity: 53, Merit: 0)
December 07, 2013, 04:15:16 PM  #64

I'm working on analyzing centralization incentives

I suggest that the value of a bitcoin is directly related to the trust in the decentralization of the system.
And the level of trust in Bitcoin is somewhat related to its (de)centralization. The higher the centralization, the lower the value per block.
I don't see any mention of that in your paper about 'centralization incentives'. Shouldn't that be part of the equation?
Peter Todd (Legendary, expert; Activity: 1120, Merit: 1150)
December 07, 2013, 04:32:46 PM  #65

I'm working on analyzing centralization incentives

I suggest that the value of a bitcoin is directly related to the trust in the decentralization of the system.
And the level of trust in Bitcoin is somewhat related to its (de)centralization. The higher the centralization, the lower the value per block.
I don't see any mention of that in your paper about 'centralization incentives'. Shouldn't that be part of the equation?

It's an externality. If your logic were right, coal power plants wouldn't exist, but they do.

zimmah (Legendary; Activity: 1106, Merit: 1005)
December 07, 2013, 04:40:09 PM  #66

If a block is created every second on average, blocks will be generated so fast that it will create huge problems.

The network will not be able to verify the order of the blocks, and it will be impossible to know which chain is the real chain.

The whole reason the block-chain orders transactions is to prevent double spending that abuses lag to mess up the order of transactions, where the network disagrees about which transaction happened first.

There's no way the whole network can confirm a block within a second.
tucenaber (Sr. Member; Activity: 337, Merit: 252)
December 07, 2013, 04:55:01 PM  #67

There's no way the whole network can confirm a block within a second.

Clearly you have not read the paper.
iddo (Sr. Member; Activity: 360, Merit: 251)
December 07, 2013, 05:10:02 PM  #68

After some discussion at #bitcoin-wizards (freenode IRC) with Aviv Zohar and several Bitcoin devs, it seems to me that there is an advantage to the current Bitcoin rule for selecting the best chain, over the proposed rule in this new paper.

First note that there is ongoing research by Bitcoin devs to enable lite nodes to verify whether txns are valid (link1, link2, link3), which may also allow us to remove the checkpoints in the client: new nodes could be bootstrapped without having to verify the signatures in the entire history, making the Bitcoin system more trust-free.

Consider an attacker who creates many difficulty-1 orphans that extend the genesis block (which requires relatively low PoW effort) and thereby DoS-attacks the other nodes by bloating their local copy of the blockchain with these many orphans. Note that headers-first sync will mitigate this attack somewhat, because the nodes will only fetch and keep the headers of the orphans, but it wouldn't fully mitigate this attack. Right now, Bitcoin is protected from this attack because of the checkpoints, as clients will reject orphans at genesis because of the later checkpoints.

If the checkpoints are removed, I think that Bitcoin can still have anti-DoS mechanisms against this kind of attack. For example, the most naive anti-DoS protection would be for the node to have some quota and not accept more than a certain number of forks extending an (old) block, with a small risk that it may later need to request those rejected blocks from peers and waste communication.
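A toy sketch of that naive quota idea (the limit of 3 and the data layout are made up for illustration; a real rule would also weigh difficulty and block age):

```python
MAX_FORKS_PER_PARENT = 3          # hypothetical quota, purely illustrative

children = {}                     # parent hash -> list of accepted child hashes

def accept_block(parent, block_hash):
    """Accept a block unless its parent already has too many known children."""
    kids = children.setdefault(parent, [])
    if block_hash in kids:
        return True               # already stored
    if len(kids) >= MAX_FORKS_PER_PARENT:
        return False              # over quota: reject now, maybe re-request later
    kids.append(block_hash)
    return True

for i in range(5):
    print(accept_block("genesis", f"orphan-{i}"))   # True x3, then False x2
```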

So, under the assumption that eliminating checkpoints to have a zero-trust system is a worthy goal, the question is whether we can have anti-DoS protection with the proposed rule for best-chain selection of this new paper. Depending on how exactly the nodes reject blocks that are suspected to be attack-driven, I think there is a danger that it could cause netsplits and that the network won't re-converge. Even if the network does converge, the communication complexity could be significantly greater, as nodes will frequently need to prove to other nodes (who rejected or deleted blocks due to anti-DoS protection) that they have a subtree with a bigger weight.

I think that there are pros/cons to this newly proposed rule versus the Bitcoin rule (the complexity needed for lite nodes is probably another disadvantage, and of course there are also significant advantages, as discussed in the paper). At this particular moment in time I'd feel more secure exchanging my fiat money for bitcoins than for a cryptocoin that implements the newly proposed chain selection rule Smiley But I'm of course open to the possibility that the new rule will lead to a system that's more secure overall.

maaku (Legendary, expert; Activity: 905, Merit: 1011)
December 07, 2013, 06:03:17 PM  #69

iddo, I think there are a few things to point out with respect to that wizards' conversation (where we didn't really reach consensus):

1) The DoS danger can be avoided with some trivial heuristics, for example: drop or ignore orphaned blocks with less than factor X of the difficulty of the currently accepted best block, and drop blocks furthest from the current best block when orphan storage exceeds Y megabytes/gigabytes. In reality there are a spectrum of possibilities between nodes having omniscient knowledge about all forks (pure GHOST), and nodes blindly following the most-work chain (bitcoin current). Even a heuristic-limited, DoS-safe version somewhere in the middle of that spectrum would be an improvement over today.

2) This is a local-only change that does not require consensus. It's okay for light nodes to still follow the most-work chain. What this change does is provide miners specifically with a higher chance of ending up on the final chain, by better estimating which fork has the most hash power behind it.

3) What this patch actually accomplishes is a little obscured by the hyperbole of the OP. There are alt coins which had a constant difficulty, and those which had incredibly low block intervals. I'm speaking in the past tense: they basically imploded when their network size increased and the propagation time approached the block interval. This patch wouldn't fix that problem (because eventually you run out of bandwidth), but it does let you get a lot closer to the network's block propagation time before witnessing catastrophic consequences. This is important because when we increase the maximum transaction processing rate of bitcoin, it will be by either dropping the block interval or increasing the maximum block size (both of which are hard forks), and how far we can safely go is bounded by this limit, among others.
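The heuristics in point (1) might look something like this sketch, where X and Y are placeholders for the factors named above (nothing here is an actual bitcoind policy):

```python
DIFFICULTY_FACTOR_X = 0.5              # placeholder for "factor X"
MAX_ORPHAN_BYTES_Y = 100 * 2**20       # placeholder for "Y megabytes" (100 MB)

def keep_orphan(orphan_difficulty, best_difficulty):
    """Drop orphans whose difficulty is far below the current best block's."""
    return orphan_difficulty >= DIFFICULTY_FACTOR_X * best_difficulty

def evict_orphans(orphans, total_bytes):
    """orphans: list of (distance_from_best, size_bytes, block_hash).
    Evict the blocks furthest from the current best block until under quota."""
    orphans.sort(key=lambda o: o[0])               # nearest first
    while total_bytes > MAX_ORPHAN_BYTES_Y and orphans:
        _, size, _ = orphans.pop()                 # furthest evicted first
        total_bytes -= size
    return orphans, total_bytes
```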

I'm an independent developer working on bitcoin-core, making my living off community donations.
If you like my work, please consider donating yourself: 13snZ4ZyCzaL7358SmgvHGC9AxskqumNxP
avivz78, OP (Newbie; Activity: 21, Merit: 0)
December 07, 2013, 06:15:18 PM  #70

Quote
1) Advantage of nodes with low latency. Nodes that can reach the rest of the network quickly have more blocks on the main chain and less orphans.

Answer: This is already going on in the current bitcoin protocol. We don't improve things, but our modification doesn't hurt either.

Actually it makes the problem significantly worse, as more orphans lead to more opportunities for a larger miner to gain an advantage over a smaller one.

Note the equations for P(Q,L) in the paper I'm working on that analyzes centralization incentives: they all depend on the block interval in such a way that a smaller block interval makes larger hashing power more significant.

I suggest you correct your post.


Peter, I agree with you that there is a big problem with high orphan rates, but this is not a symptom of GHOST; rather, it is a symptom of high transaction rates.

Whether you use GHOST or longest-chain at high rates, you must either resort to large blocks that propagate slowly or to high block creation rates. There is no getting around that. Both cause a high orphan rate (or perhaps the right term is a high rate of off-chain blocks). We do not pretend GHOST solves this problem. The only thing we claim is that GHOST keeps the protocol secure at high rates: the 50% attack can no longer be executed with less than 50% of the hash rate.
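For readers following along, the difference between the two selection rules can be sketched in a few lines (toy unit-weight blocks; the paper's actual rule weighs blocks by difficulty):

```python
def ghost_tip(children, root):
    """GHOST: at each fork, descend into the child with the heaviest subtree."""
    def subtree_size(b):
        return 1 + sum(subtree_size(c) for c in children.get(b, []))
    block = root
    while children.get(block):
        block = max(children[block], key=subtree_size)
    return block

# Toy tree: A's subtree holds 4 blocks but is short; B's chain is longer.
# Longest-chain picks B2; GHOST stays inside A's heavier subtree, so the
# off-chain blocks A1..A3 still count towards securing A.
tree = {"G": ["A", "B"], "A": ["A1", "A2", "A3"], "B": ["B1"], "B1": ["B2"]}
print(ghost_tip(tree, "G"))
```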
mission (Newbie; Activity: 21, Merit: 0)
December 07, 2013, 07:03:41 PM  #71

Some questions to devs please: what's the chance to see it implemented in Bitcoin? Would it take very long? What's the biggest obstacle?
maaku (Legendary, expert; Activity: 905, Merit: 1011)
December 07, 2013, 07:58:24 PM  #72

Some questions to devs please: what's the chance to see it implemented in Bitcoin? Would it take very long? What's the biggest obstacle?

It would need to be post-pruning to prevent DoS attacks.

EDIT: But to be clear, it gives ~0 benefit until you hard-fork to significantly increase the transaction rate.

Peter Todd (Legendary, expert; Activity: 1120, Merit: 1150)
December 07, 2013, 08:46:52 PM  #73

Peter, I agree with you that there is a big problem with high orphan rates, but this is not a symptom of GHOST; rather, it is a symptom of high transaction rates.

Whether you use GHOST or longest-chain at high rates, you must either resort to large blocks that propagate slowly or to high block creation rates. There is no getting around that. Both cause a high orphan rate (or perhaps the right term is a high rate of off-chain blocks). We do not pretend GHOST solves this problem. The only thing we claim is that GHOST keeps the protocol secure at high rates: the 50% attack can no longer be executed with less than 50% of the hash rate.

Going back to your original post:

Quote
We note that block propagation times are the primary obstacle for scalability.

The obstacle to scalability is keeping Bitcoin decentralized while scaling up; we know that Bitcoin can scale if we sacrifice decentralization - Visa and Mastercard are doing just fine. Ultimately you're proposing something that solves one problem - the high granularity of confirmations and the long wait associated with them - at the expense of scalability with decentralization. So don't claim you've done anything other than present an interesting academic analysis of a specific trade-off possible in the system.

This also suggests why the Bitcoin community has talked about the underlying idea in your paper repeatedly (1) among themselves, but no one has ever bothered to investigate it more fully: it's obviously a bad trade-off in the context of proof-of-work (with the possible exception of applying the idea to the p2pool share-chain).

1) I myself proposed it for my zookeyv key-value global consensus system proposal - #bitcoin-wizards 2013-05-31 - though in the context of proof-of-sacrifice.

iddo (Sr. Member; Activity: 360, Merit: 251)
December 07, 2013, 09:21:48 PM  #74

iddo, I think there are a few things to point out with respect to that wizards' conversation (where we didn't really reach consensus):

This is true; so far no one has claimed that this chain selection rule cannot withstand DoS (when checkpoints are removed), only that it's unclear whether it can.

1) The DoS danger can be avoided with some trivial heuristics, for example: drop or ignore orphaned blocks with less than factor X of the difficulty of the currently accepted best block, and drop blocks furthest from the current best block when orphan storage exceeds Y megabytes/gigabytes. In reality there are a spectrum of possibilities between nodes having omniscient knowledge about all forks (pure GHOST), and nodes blindly following the most-work chain (bitcoin current). Even a heuristic-limited, DoS-safe version somewhere in the middle of that spectrum would be an improvement over today.

Those heuristics are trivial; whether they work is less trivial Smiley It seems that when you apply such heuristics with the GHOST chain selection rule, it is more sensitive (compared to Bitcoin) to netsplit/convergence issues and/or to communication blowup, as nodes might need to negotiate with their peers about which missing solved blocks should be re-transmitted. This will of course be easier to discuss against a concrete proposal for anti-DoS rules, but it's probably difficult to design the best rules on paper and prove theoretically that they work; therefore (as mentioned in #bitcoin-wizards) what we really need is simulations.

2) This is a local-only change that does not require consensus. It's okay for light nodes to still follow the most-work chain. What this change does is provide miners specifically with a higher chance of ending up on the final chain, by better estimating which fork has the most hash power behind it.

Not sure why you call it "local-only change", because as we are discussing here, it appears that there's a risk for netsplits that don't re-converge, especially if different nodes will follow different rules.

3) What this patch actually accomplishes is a little obscured by the hyperbole of the OP. There are alt coins which had a constant difficulty, and those which had incredibly low block intervals. I'm speaking in the past tense: they basically imploded when their network size increased and the propagation time approached the block interval. This patch wouldn't fix that problem (because eventually you run out of bandwidth), but it does let you get a lot closer to the network's block propagation time before witnessing catastrophic consequences. This is important because when we increase the maximum transaction processing rate of bitcoin, it will be by either dropping the block interval or increasing the maximum block size (both of which are hard forks), and how far we can safely go is bounded by this limit, among others.

Agreed.
maaku (Legendary, expert; Activity: 905, Merit: 1011)
December 07, 2013, 09:36:21 PM  #75

2) This is a local-only change that does not require consensus. It's okay for light nodes to still follow the most-work chain. What this change does is provide miners specifically with a higher chance of ending up on the final chain, by better estimating which fork has the most hash power behind it.

Not sure why you call it "local-only change", because as we are discussing here, it appears that there's a risk for netsplits that don't re-converge, especially if different nodes will follow different rules.
There is no possibility for non-convergence. The most-work tree eventually wins out. Always.

avivz78, OP (Newbie; Activity: 21, Merit: 0)
December 07, 2013, 09:40:52 PM  #76

Going back to your original post:

Quote
We note that block propagation times are the primary obstacle for scalability.

The obstacle to scalability is keeping Bitcoin decentralized while scaling up; we know that Bitcoin can scale if we sacrifice decentralization - Visa and Mastercard are doing just fine. Ultimately you're proposing something that solves one problem - the high granularity of confirmations and the long wait associated with them - at the expense of scalability with decentralization. So don't claim you've done anything other than present an interesting academic analysis of a specific trade-off possible in the system.

Peter,

I keep trying to agree with you.

One last try before I give up:

Surely you must agree that if there were no block propagation delays, there would be no centralization associated with scaling up. Blocks go instantly to everyone, and there are no problems with miners getting more than their share of the blocks.

I guess the difference in our perspectives is that I don't think these are optional tradeoffs. If at some point bitcoin becomes mainstream, there will be increased pressure to allow more transactions through (or alternatively, fees will soar higher than those of the money transmitters Bitcoin is trying to replace). You then run into several problems. One is a security problem against attackers. The other is a pull towards centralization. All I'm saying is that we claim to solve the first problem. The second clearly remains.
avivz78, OP (Newbie; Activity: 21, Merit: 0)
December 07, 2013, 09:47:33 PM  #77

Some questions to devs please: what's the chance to see it implemented in Bitcoin? Would it take very long? What's the biggest obstacle?

It would need to be post-pruning to prevent DoS attacks.

EDIT: But to be clear, it gives ~0 benefit until you hard-fork to significantly increase the transaction rate.

maaku, can you provide a link to how Bitcoin would prevent similar DoS attacks (without checkpoints that is)?

I'd like to address that critique, but I don't know how it's solved in Bitcoin.
iddo (Sr. Member; Activity: 360, Merit: 251)
December 07, 2013, 10:09:29 PM  #78

2) This is a local-only change that does not require consensus. It's okay for light nodes to still follow the most-work chain. What this change does is provide miners specifically with a higher chance of ending up on the final chain, by better estimating which fork has the most hash power behind it.

Not sure why you call it "local-only change", because as we are discussing here, it appears that there's a risk for netsplits that don't re-converge, especially if different nodes will follow different rules.
There is no possibility for non-convergence. The most-work tree eventually wins out. Always.

There could be an extreme scenario where two factions of the network work on different trees, and each faction uses an anti-DoS rule that rejects blocks from the other faction's tree unless they arrive in a batch proving that the competing tree's PoW wins; but because of propagation lag, each faction solves more PoW blocks in its own tree, so by the time the batch arrives from the other tree, that batch is already losing and is therefore rejected. No?
avivz78, OP (Newbie; Activity: 21, Merit: 0)
December 07, 2013, 10:41:14 PM  #79

There is no possibility for non-convergence. The most-work tree eventually wins out. Always.

There could be an extreme scenario where two factions of the network work on different trees, and each faction uses an anti-DoS rule that rejects blocks from the other faction's tree unless they arrive in a batch proving that the competing tree's PoW wins; but because of propagation lag, each faction solves more PoW blocks in its own tree, so by the time the batch arrives from the other tree, that batch is already losing and is therefore rejected. No?

Iddo, a collapse has to occur. This is exactly the same reasoning as in Theorem 8.6 in the paper: block construction is stochastic, so the two forks are not synchronized perfectly even if they have exactly the same amount of hash power supporting them.  The differences in block creation times drift apart until they grow larger than the time it takes to send the message about the total weight in the branch. Thus, a collapse eventually occurs.

With anti-DoS rules it's highly unlikely that you get into this sort of fork in the first place: you accept all blocks created in, say, the past month or so (difficulty is high enough that they are not really a DoS attack). A fork that breaks the DoS rules will only form if 1) an attacker manages to keep up with the network's rate for a whole month (which has to be a 50% attack), or 2) the network is disconnected for over a month. Even in these cases, the forks resolve rather quickly.

Besides, another heuristic can also help here: accept alternative branches into the tree even if they have 80% of the total weight of your current branch. This is still enough to stop a DoS, and certainly enough to make DoS forks collapse instantly.
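That second heuristic is simple enough to state in code (the 80% threshold is as described above; the function name is mine):

```python
ACCEPT_RATIO = 0.8

def accept_competing_branch(competing_weight, current_weight):
    """Admit a competing branch into the local tree once its total weight
    reaches 80% of the current branch's weight: close races get stored,
    while cheap DoS forks stay far below the threshold and are rejected."""
    return competing_weight >= ACCEPT_RATIO * current_weight

print(accept_competing_branch(85, 100))   # True: a genuine near-tie is kept
print(accept_competing_branch(10, 100))   # False: a cheap DoS fork is dropped
```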
iddo (Sr. Member; Activity: 360, Merit: 251)
December 07, 2013, 11:29:45 PM  #80

There is no possibility for non-convergence. The most-work tree eventually wins out. Always.

There could be an extreme scenario where two factions of the network work on different trees, and each faction uses an anti-DoS rule that rejects blocks from the other faction's tree unless they arrive in a batch proving that the competing tree's PoW wins; but because of propagation lag, each faction solves more PoW blocks in its own tree, so by the time the batch arrives from the other tree, that batch is already losing and is therefore rejected. No?

Iddo, a collapse has to occur. This is exactly the same reasoning as in Theorem 8.6 in the paper: block construction is stochastic, so the two forks are not synchronized perfectly even if they have exactly the same amount of hash power supporting them.  The differences in block creation times drift apart until they grow larger than the time it takes to send the message about the total weight in the branch. Thus, a collapse eventually occurs.

Right, apologies, I agree that this will eventually re-converge. But there is still an interesting question here regarding how long it takes the two factions to re-converge, depending on the anti-DoS rules that they deploy.

With anti-DoS rules it's highly unlikely that you get into this sort of fork in the first place: you accept all blocks created in, say, the past month or so (difficulty is high enough that they are not really a DoS attack). A fork that breaks the DoS rules will only form if 1) an attacker manages to keep up with the network's rate for a whole month (which has to be a 50% attack), or 2) the network is disconnected for over a month. Even in these cases, the forks resolve rather quickly.

Besides, another heuristic can also help here: accept alternative branches into the tree even if they have 80% of the total weight of your current branch. This is still enough to stop a DoS, and certainly enough to make DoS forks collapse instantly.

Yes, those could be sensible anti-DoS rules. But if the node accepts all blocks from the past month, we may also need a rule to delete old blocks in losing subtrees and thereby reclaim disk space. As Andrew Miller said in #bitcoin-wizards, in terms of incentives there's a tradeoff here, since keeping some orphans around can save on future bandwidth, but at the cost of storage now.