Bitcoin Forum
Pages: « 1 2 3 4 5 6 [7] 8 9 »  All
Author Topic: New paper: Accelerating Bitcoin's Transaction Processing  (Read 36281 times)
maaku
Legendary
*
expert
Offline Offline

Activity: 905
Merit: 1011


View Profile
December 12, 2013, 06:23:13 PM
 #121

There is no stable, secure way to manage interest rates that cannot be trivially gamed by participants, including the proposal you linked to. You're not the first to consider this problem. If you're of the opinion that some inflation/interest is desirable, you might be interested in Freicoin. But this is wandering far afield from the topic.

I'm an independent developer working on bitcoin-core, making my living off community donations.
If you like my work, please consider donating yourself: 13snZ4ZyCzaL7358SmgvHGC9AxskqumNxP
"Your bitcoin is secured in a way that is physically impossible for others to access, no matter for what reason, no matter how good the excuse, no matter a majority of miners, no matter what." -- Greg Maxwell
eggdescrambler
Newbie
*
Offline Offline

Activity: 43
Merit: 0


View Profile WWW
December 13, 2013, 02:49:58 PM
 #122




Have you seen this paper?

 http://eprint.iacr.org/2012/248.pdf

It sounds like, from what is described in section 5.3, that once this double-spending alert is implemented, waiting just 15 seconds after receiving a payment without seeing any conflict would be sufficient to be confident, wouldn't it?
maaku
Legendary
*
expert
Offline Offline

Activity: 905
Merit: 1011


View Profile
December 13, 2013, 07:34:25 PM
 #123

No. That paper makes the flawed assumption that double-spends must be propagated on the network. On the contrary, someone executing a double spend could own or have some relationship with one or more large mining pools and send the double-spend to them directly. No one would know about the double spend until it is already confirmed in the next block.

eggdescrambler
Newbie
*
Offline Offline

Activity: 43
Merit: 0


View Profile WWW
December 13, 2013, 09:53:48 PM
 #124

Ok, I see what you mean: only that large pool would know about the double spend.
So the pool would need to be in bed with the double-spender.


No. That paper makes the flawed assumption that double-spends must be propagated on the network. On the contrary, someone executing a double spend could own or have some relationship with one or more large mining pools and send the double-spend to them directly. No one would know about the double spend until it is already confirmed in the next block.
eggdescrambler
Newbie
*
Offline Offline

Activity: 43
Merit: 0


View Profile WWW
December 13, 2013, 10:55:58 PM
 #125


How important is this threat? I mean, how large is the largest pool, and what share of the network's hash power does it control? I'm trying to quantify the probability of this attack.

Ok, I see what you mean, only those large pool would know about the double spend.
So this large pool would need to be in bed with the double spender.


No. That paper makes the flawed assumption that double-spends must be propagated on the network. On the contrary, someone executing a double spend could own or have some relationship with one or more large mining pools and send the double-spend to them directly. No one would know about the double spend until it is already confirmed in the next block.
maaku
Legendary
*
expert
Offline Offline

Activity: 905
Merit: 1011


View Profile
December 14, 2013, 02:08:47 AM
 #126

The largest pools are about 30% of the network. That would give them a 1-in-3 chance of pulling it off, which is more than enough for some scenarios.

EDIT: The paper also assumes that once one transaction has been seen, a node will always reject a double-spending transaction. This is also false. Nothing prevents a self-serving operator from discarding the initial transaction and accepting the double-spend (because it has a higher fee, for example).
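For anyone who wants to put rough numbers on that (my own back-of-the-envelope sketch, not from the paper): a pool controlling fraction q of the network hashrate finds the next block with probability roughly q, and at least one of the next k blocks with probability 1 - (1 - q)^k.

```cpp
#include <cassert>
#include <cmath>

// Probability that a pool with hashrate share q mines at least one of the
// next k blocks, treating each block as an independent Bernoulli trial.
// This is an illustration only, not code from any client.
double ProbAtLeastOneBlock(double q, int k) {
    return 1.0 - std::pow(1.0 - q, k);
}
```

With q = 0.30 and k = 1 this gives the "1-in-3 chance" above; if the attacker can keep trying for two blocks it already rises past 50%.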

eggdescrambler
Newbie
*
Offline Offline

Activity: 43
Merit: 0


View Profile WWW
December 15, 2013, 10:28:56 PM
 #127


Thanks

Still, having this double-spend alert would reduce the current threat, making the attack much more difficult to achieve.
What's the hold-up in adding this double-spend alert?

Other than the paper that started this thread, what other approaches are being investigated to improve bitcoin's transaction speed?

Thanks very much for your reply.


The largest pools are about 30% of the network. That would give them a 1-in-3 chance of pulling it off, which is more than enough for some scenarios.

EDIT: The paper also assumes that once one transaction has been seen, a node will always reject a double-spending transaction. This is also false. Nothing prevents a self-serving operator from discarding the initial transaction and accepting the double-spend (because it has a higher fee, for example).
billotronic
Legendary
*
Offline Offline

Activity: 1610
Merit: 1000


Crackpot Idealist


View Profile
December 15, 2013, 11:38:57 PM
 #128

Yes. There are things hiding in the dark... us shills were actually the team of 'second gunmen' on the Grassy Knoll.

There, it's open. Ha.

This post sums up why all this bullshit is a scam
Read It. Hate It. Change the facts that it represents.
https://bitcointalk.org/index.php?topic=1606638.msg16139644#msg16139644
eggdescrambler
Newbie
*
Offline Offline

Activity: 43
Merit: 0


View Profile WWW
December 18, 2013, 12:40:35 AM
 #129

Yes, very sad.

I would love to find out what's holding bitcoin coders back from integrating the double-spending alert they discuss in section 8.5:
http://eprint.iacr.org/2012/248.pdf
Not that it would resolve the issue of a compromised mining pool, but it would still significantly reduce the problem.


We need a moderator to step in and remove all this off-topic discussion.

Or at least split the topic.

gmaxwell, do it please.
iddo
Sr. Member
****
Offline Offline

Activity: 360
Merit: 251


View Profile
December 18, 2013, 09:40:32 AM
Last edit: December 18, 2013, 02:09:52 PM by iddo
 #130

F(prev_utxo_tree_hash, transaction set for current block) = utxo_tree_hash

I meant that it'd probably have less complexity to do: F(prev_utxo_tree_hash, transaction set for current block, utxo_tree_hash) == 1
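To make that relation concrete (a toy stand-in only: StateTransition below is a made-up hash-chaining function, not the real UTXO-tree update), the SNARK would attest that the prover knows a transaction set satisfying a boolean predicate like this, without the verifier re-running it:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

using Hash = size_t;

// Hypothetical stand-in for F: folds each transaction into the previous
// state hash. The real function would update the UTXO tree.
Hash StateTransition(Hash prev, const std::vector<std::string>& txs) {
    Hash h = prev;
    for (const auto& tx : txs)
        h ^= std::hash<std::string>{}(tx) + 0x9e3779b9 + (h << 6) + (h >> 2);
    return h;
}

// The predicate the proof certifies:
// F(prev_utxo_tree_hash, txs, utxo_tree_hash) == 1
bool TransitionRelation(Hash prev, const std::vector<std::string>& txs,
                        Hash claimed) {
    return StateTransition(prev, txs) == claimed;
}
```

The verifier only ever sees the two hashes and the proof; the transaction set stays on the prover's side, which is where the zero-knowledge question comes in.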

It's a great idea, but what does it have to do with zero-knowledge?

I suppose it's partially zero-knowledge, since you wouldn't need all of the inputs to the program. I did read somewhere that SNARK programs can have public inputs as well as arbitrary inputs which are unknown to verifiers. Is that right? Some details, such as the transactions, would be hidden from the verifier.

Your attitude is too cavalier... Just because you weren't supposed to know the non-public inputs doesn't mean that those inputs don't leak when you examine the SNARK proof, you need a zero-knowledge proof to guarantee that nothing leaks besides the output.
But for the last time, why do you care whether the proof is zero-knowledge or not in this context?
Etlase2
Hero Member
*****
Offline Offline

Activity: 798
Merit: 1000


View Profile
December 18, 2013, 02:41:12 PM
 #131

Your attitude is too cavalier... Just because you weren't supposed to know the non-public inputs doesn't mean that those inputs don't leak when you examine the SNARK proof, you need a zero-knowledge proof to guarantee that nothing leaks besides the output.
But for the last time, why do you care whether the proof is zero-knowledge or not in this context?

I think the distinction between must-not-have and does-not-necessarily-need, i.e. ZKP vs. SNARK, is being lost here. It's probably not necessary to keep asking this question.

eduffield
Legendary
*
Offline Offline

Activity: 1176
Merit: 1036


Dash Developer


View Profile WWW
December 18, 2013, 03:52:19 PM
 #132

Is there an implementation of this that someone is working on? I'd like to help implement it.

Dash - Digital Cash | dash.org | dashfoundation.io | dashgo.io
avivz78 (OP)
Newbie
*
Offline Offline

Activity: 21
Merit: 0


View Profile WWW
December 18, 2013, 04:07:28 PM
 #133

Is there an implementation of this that someone is working on? I'd like to help implement it

We'd be happy to cooperate with anyone who wants to have a go at an implementation. While we have poked around in the bitcoin source code a bit, I'm afraid we're not familiar enough with the code to pull this off easily ourselves.

I'm also very interested in establishing a testnet that will be stress tested with many transactions per second. If you have an idea on how to do this, I'd be happy to know.

Feel free to email me directly regarding any of these projects.
LurbQBurdock
Newbie
*
Offline Offline

Activity: 24
Merit: 0



View Profile
December 18, 2013, 05:51:54 PM
Last edit: December 18, 2013, 06:47:29 PM by LurbQBurdock
 #134

I've been reading through the bitcoin source code.  I'm interested in working with anyone who tries to implement this.  Here's what I've found:

All the relevant code seems to be in main.cpp and main.h.

As currently implemented, we can traverse the blocktree from leaves->root, but Aviv and Yonatan's algorithm requires us to be able to traverse the blocktree from root->leaves.  The first thing that needs to be done is that we need to rewrite the tree to make it traversable in both directions.

After this, the code to determine which block is the "BestBlock" to append a new child block to needs to be pretty much completely rewritten.  The code currently computes the ChainWork (the amount of work needed to compute the current block and all blocks above the current block on the tree) of each block in the tree.  It then searches through the entirety of (the still valid parts of) the tree for the block that has the largest ChainWork, and it assigns this block as the BestBlock.  Instead, we need to traverse the tree from root->leaves, and compare the work in each subtree whenever we come to a fork.
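In case it helps, here's how I'd sketch that root->leaves rule (a toy illustration only; Block, SubtreeWork and BestLeaf are made-up names, not the actual main.cpp structures):

```cpp
#include <cstdint>
#include <vector>

// Each block links both ways, so the tree can be walked root->leaves as
// well as leaves->root.
struct Block {
    int id = 0;
    Block* parent = nullptr;
    std::vector<Block*> children;
    uint64_t work = 1;  // work contributed by this block alone
};

// Total work in the subtree rooted at b (naive recursion, fine for a sketch).
uint64_t SubtreeWork(const Block* b) {
    uint64_t w = b->work;
    for (const Block* c : b->children) w += SubtreeWork(c);
    return w;
}

// GHOST-style choice of tip: starting at the root, at each fork descend
// into the child whose subtree carries the most total work, until a leaf.
const Block* BestLeaf(const Block* root) {
    const Block* cur = root;
    while (!cur->children.empty()) {
        const Block* best = cur->children[0];
        for (const Block* c : cur->children)
            if (SubtreeWork(c) > SubtreeWork(best)) best = c;
        cur = best;
    }
    return cur;
}
```

On a fork where one branch holds two blocks and the competing branch holds three, the three-block subtree wins even though no single leaf in it is "heavier", which is exactly where this differs from the current ChainWork comparison.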

Also, the code to invalidate and discard blocks on the tree needs to be rewritten.  I'm lost on how this works and how it should be rewritten.
TierNolan
Legendary
*
Offline Offline

Activity: 1232
Merit: 1083


View Profile
December 18, 2013, 06:04:40 PM
 #135

The client calculates the proof of work for each block only once.

Effectively, each block has a number which means "Proof of work for the chain, if this block was the leaf".

It also has a PoW for best chain variable.  If you add a block and it is higher than the best chain, then it has to set that chain as the best.

Under your system, when a new block is added, it would be more complex.

Code:
A <- B <- C  <- D
     \ <- C' <- D'
          \  <- D''

When comparing D' to D, D' would get extra weight from D''.  However, when comparing D' to D'', it wouldn't get the extra weight.  There isn't one "work" number for each node.

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
avivz78 (OP)
Newbie
*
Offline Offline

Activity: 21
Merit: 0


View Profile WWW
December 18, 2013, 09:22:58 PM
 #136

I've been reading through the bitcoin source code.  I'm interested in working with anyone who tries to implement this.  Here's what I've found:

All the relevant code seems to be in main.cpp and main.h.

As currently implemented, we can traverse the blocktree from leaves->root, but Aviv and Yonatan's algorithm requires us to be able to traverse the blocktree from root->leaves.  The first thing that needs to be done is that we need to rewrite the tree to make it traversable in both directions.

After this, the code to determine which block is the "BestBlock" to append a new child block to needs to be pretty much completely rewritten.  The code currently computes the ChainWork (the amount of work needed to compute the current block and all blocks above the current block on the tree) of each block in the tree.  It then searches through the entirety of (the still valid parts of) the tree for the block that has the largest ChainWork, and it assigns this block as the BestBlock.  Instead, we need to traverse the tree from root->leaves, and compare the work in each subtree whenever we come to a fork.

Hi, thanks for taking a look! I think it's a good idea to map out all the changes that need to take place.

Comments:

1) I might be wrong about this, because I haven't looked at the implementation too carefully, but in principle, I think forward and backward traversal of the chain should already be possible: as far as I'm aware, the unspent transaction output set needs to be rolled back if a recent part of the chain is replaced. If I'm not mistaken (and somebody please correct me if I am) the unspent outputs are only maintained for the "best block" and if this is replaced by an alternate chain, it isn't built from scratch.

2) There is also a need to add hashes (or other identifiers) of newly discovered off-chain blocks into each block ("newly discovered" here means: not included inside any other previously reachable block). Nodes must then request such blocks if they do not already have them, and the propagation mechanisms / inv messages need to be adjusted accordingly.

Also, the code to invalidate and discard blocks on the tree needs to be rewritten.  I'm lost on how this works and how it should be rewritten.

3) I'm not sure what you mean by "the code to invalidate and discard blocks". Can you explain in more detail?

avivz78 (OP)
Newbie
*
Offline Offline

Activity: 21
Merit: 0


View Profile WWW
December 18, 2013, 09:44:19 PM
 #137

The way the client calculates the proof of work for each block is that it does the calculation only once.

Effectively, each block has a number which means "Proof of work for the chain, if this block was the leaf".

It also has a PoW for best chain variable.  If you add a block and it is higher than the best chain, then it has to set that chain as the best.

Under your system, when a new block is added, it would be more complex.

Code:
A <- B <- C  <- D
     \ <- C' <- D'
          \  <- D''

When comparing D' to D, D' would get extra weight from D''.  However, when comparing D' to D'', it wouldn't get the extra weight.  There isn't one "work" number for each node.

You are right; it may not be as simple as keeping the single number "proof of work for the chain as if this block were the leaf", but in principle the following approach works (and its refinement below is probably much more efficient):

Approach 1 (very naive):
1) For each block, save the total amount of work in its subtree. Let us denote this saved value by W(X)=total work of subtree rooted at block X.

In your example above: the subtree of block C includes blocks C,D. The subtree of block C' includes C',D',D''.
2) When a new block arrives, simply increment W(X) for all blocks X along the path to the genesis by the amount of work represented by this new block.
3) If the new block was off-chain, find its connection to the main chain and check if a change to the chain is needed.

(the 2nd step is computationally expensive so the following refinement can be done)

Approach 2 (Lazy evaluation):
The idea here is to still maintain values for total work in the subtree rooted at each block, but to do the updates only when required.
For each block on the main chain we maintain a lower bound on the work in its subtree; for each off-chain node we maintain an exact value. The idea is that off-chain branches die out after a relatively short while, and on-chain blocks that are sufficiently deep in the tree do not need to be examined often.

Invariants: W(X) for any block X that is off chain is exact, W(X) for any block X that is on the main chain is a lower-bound on the real value.

The updates:

When a new block is added:
1) If it extends the current chain, do not update W(X) for any block.
2) Otherwise, if it is off-chain then traverse its path (towards the genesis) until it meets the main chain, and increment the work W(X) of each subtree along the way accordingly. Also traverse the main chain up to this point and update W(X) for each node along the way.
3) check to see if a change of chain is needed. If it is, the off-chain values are exact and can be used to find the new best leaf by following the greedy path towards the leaves.

I hope I explained this well enough for you to at least get the basic idea... Please let me know if I didn't, or if you spot a mistake.
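A minimal sketch of Approach 1, in case it helps (my own toy code; Node and Attach are hypothetical names, and the real change would of course live in main.cpp):

```cpp
#include <cstdint>
#include <vector>

// Every block caches W(X), the total work of its subtree; a newly arrived
// block bumps W(X) for every ancestor on the path back to the genesis.
struct Node {
    Node* parent = nullptr;
    std::vector<Node*> children;
    uint64_t own_work = 1;      // work of this block alone
    uint64_t subtree_work = 1;  // W(X): own work plus all descendants
};

// Step 2 of Approach 1: attach the new block and propagate its work
// toward the genesis.
void Attach(Node* parent, Node* nb) {
    nb->parent = parent;
    parent->children.push_back(nb);
    for (Node* x = parent; x != nullptr; x = x->parent)
        x->subtree_work += nb->own_work;
}
```

This is the computationally expensive variant (every arrival walks to the genesis); the lazy variant only does the walk for off-chain arrivals, as described above.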
LurbQBurdock
Newbie
*
Offline Offline

Activity: 24
Merit: 0



View Profile
December 18, 2013, 10:43:06 PM
 #138

1) I might be wrong about this, because I haven't looked at the implementation too carefully, but in principle, I think forward and backward traversal of the chain should already be possible: as far as I'm aware, the unspent transaction output set needs to be rolled back if a recent part of the chain is replaced. If I'm not mistaken (and somebody please correct me if I am) the unspent outputs are only maintained for the "best block" and if this is replaced by an alternate chain, it isn't built from scratch.
class CBlockIndex has pointers pprev and pnext that point to the parent block and the child along the main chain.  The main chain can be traversed forward and backward, but the entirety of the tree cannot.  I see no reason why this can't be changed.

2) There is also a need to add hashes (or other identifiers) of newly discovered off-chain blocks into each block ("newly discovered" here means: not included inside any other previously reachable block). Nodes must then request such blocks if they do not already have them, and the propagation mechanisms / inv messages need to be adjusted accordingly.
Yes, this would also need to be done in order to implement your proof-of-work difficulty-adjustment algorithm, right?  But it's not required for GHOST?  I have not started looking into how to implement this.  I am worried about the bloat that this would cause if the entire tree must be stored though.  Have you looked into how your algorithm would fare if we ignored off-chain blocks for difficulty-adjustment?

3) I'm not sure what you mean by "the code to invalidate and discard blocks". Can you explain in more detail?

I was originally under the impression that the parts of the blocktree that are not on the main chain get deleted from memory, but now I see that what I was looking at was code to invalidate blocks if the header is not correct.  I am not sure if the entire tree is kept in memory or if parts are eventually discarded.

By the way, I'm waiting to hear back from my friends before I email you back.

avivz78 (OP)
Newbie
*
Offline Offline

Activity: 21
Merit: 0


View Profile WWW
December 18, 2013, 11:52:34 PM
 #139

2) There is also a need to add hashes (or other identifiers) of newly discovered off-chain blocks into each block ("newly discovered" here means: not included inside any other previously reachable block). Nodes must then request such blocks if they do not already have them, and the propagation mechanisms / inv messages need to be adjusted accordingly.
Yes, this would also need to be done in order to implement your proof-of-work difficulty-adjustment algorithm, right?  But it's not required for GHOST?  I have not started looking into how to implement this.  I am worried about the bloat that this would cause if the entire tree must be stored though.  Have you looked into how your algorithm would fare if we ignored off-chain blocks for difficulty-adjustment?

This is actually required for GHOST, so that nodes have the same world-view and decide on the same leaf -- the off-chain blocks affect your choice of leaf. It is especially important for nodes that were offline for a while and want to catch up. You may in theory be able to do without it, but the convergence time to the same chain would take longer.

Regarding bloat: this is not a difficult issue. First, there is only potential bloat once propagation delays become significant relative to the block interval. More importantly, the contents of off-chain blocks can be discarded after a short while (e.g., after one day); only the header is needed to verify that the difficulty was met. If a switch is needed to some previously discarded subtree, the node that triggers it can supply the contents of those blocks as well.
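That pruning scheme could look something like this (a hypothetical sketch; StoredBlock and MaybePrune are made-up names):

```cpp
#include <cstdint>
#include <optional>
#include <string>
#include <vector>

// Header-only retention for stale off-chain blocks: the header alone
// suffices to check that the block met the difficulty target, so the
// transaction payload can be dropped after a grace period and re-requested
// if a reorg onto that subtree is ever needed.
struct StoredBlock {
    uint64_t header_hash;    // kept forever: enough to verify the PoW
    uint64_t time_received;
    std::optional<std::vector<std::string>> body;  // droppable payload

    void MaybePrune(uint64_t now, uint64_t grace_seconds) {
        if (body && now - time_received > grace_seconds) body.reset();
    }
};
```

So the extra storage is one header per off-chain block, plus at most a day's worth of full off-chain blocks.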
LurbQBurdock
Newbie
*
Offline Offline

Activity: 24
Merit: 0



View Profile
December 19, 2013, 12:54:47 AM
Last edit: December 19, 2013, 03:14:09 AM by LurbQBurdock
 #140

Regarding bloat: this is not a difficult issue. First, there is only potential bloat once propagation delays become significant relative to the block interval. More importantly, the contents of off-chain blocks can be discarded after a short while (e.g., after one day); only the header is needed to verify that the difficulty was met. If a switch is needed to some previously discarded subtree, the node that triggers it can supply the contents of those blocks as well.
 OK.  This works for me.