Bitcoin Forum
Author Topic: New paper: Accelerating Bitcoin's Transaction Processing  (Read 36268 times)
avivz78 (OP)
Newbie
*
Offline

Activity: 21
Merit: 0


View Profile WWW
December 06, 2013, 08:54:27 AM
Last edit: December 07, 2013, 06:09:17 PM by avivz78
 #1

Hi all,

I'm a researcher at the Hebrew University in Israel, and I've been working with a student (Yonatan Sompolinsky) on a Bitcoin related paper. We have really exciting results that we are eager to share with the Bitcoin community. We would really like to get constructive feedback on our work from the many smart people in the community, which is why I am posting here.

Here is a link to the full paper: http://www.cs.huji.ac.il/~avivz/pubs/13/btc_scalability_full.pdf
Title: Accelerating Bitcoin's Transaction Processing (Fast Money Grows on Trees, Not Chains)

Edit: Thanks for all the good feedback! Recap of main issues and questions added below.

As the paper is quite lengthy, and is written for an academic audience (which, sadly, is not usually familiar with the protocol) we thought it best to also provide a quick explanation of our results aimed at Bitcoiners:

tl;dr:  We suggest a protocol modification to the block chain that securely allows blocks to be generated around once per second, can handle over 200 transactions per second at these rates, and consumes under 0.5 MBps in terms of bandwidth (less at lower rates than 200 TPS). All of this with no increased susceptibility to 50% attacks. This essentially solves the problem that caused Satoshi to set the 10 minute target for the block creation rate. We also analyze the number of transactions per second Bitcoin can handle with and without our modification. We note that block propagation times are the primary obstacle for scalability.


A more detailed explanation follows below. The primary goal of our research is to address Bitcoin's ability to process transactions quickly, and in large volumes. Here are our main findings:

Scalability, Delays & Security:


We begin our paper by examining the exact effects of high transaction rates on Bitcoin's security (following in the footsteps of previous work by Decker and Wattenhofer). The number of transactions per second (TPS) that Bitcoin can handle is limited by two main things: 1) the block creation rate (of 1 block every 10 minutes) and 2) the block size limit (currently at a 1MB default). These two parameters combine to limit the number of transactions per second that Bitcoin can process. The straightforward way to increase the TPS is to either increase the block size, or to increase the block creation rate. Both of these changes are controversial, and for good reason: both may affect the security guarantees of the protocol. First, let us consider an increase in the number of blocks per second (e.g., Litecoin's blocks that are created every 2.5 minutes, or even Fastcoin with its extreme 12 second blocks). Because blocks are created quickly, many conflicting blocks are created. Most will end up off the blockchain. The same symptom occurs when the block size is increased: large blocks take longer to propagate through the network (due to bandwidth limitations) and blocks that are created in the meantime are likely built on top of older blocks, i.e., they will be wasted.

The fact that many blocks are wasted lowers the security of the network and makes it more susceptible to 50% attacks. For example, if half the blocks are wasted in this manner, the network essentially wastes half its hash power on blocks that do not contribute confirmations to transactions. An attacker that is centralized and has no delays can execute a so-called 50% attack with only slightly more than 33% of the hash power. This is because it can easily create longer chains than the rest of the network (botnets that still suffer from the effect of internal delays are less effective than centralized attackers).
Using different techniques, we analyze how many blocks end up in the chain, and how many are discarded, and use this to estimate the change in security for different parameters. Among other results, we show that transmitting blocks that contain only transaction hashes (instead of full transaction records) will greatly help scalability (i.e., this is not just a 2-fold saving in bandwidth, but rather a 16-fold increase in the number of transactions per second!).

Our suggested protocol modification (which appears in section 8 of our paper):

Since high transaction rates imply many conflicting blocks are created, it would be quite useful if these blocks were not really lost. In fact, each block can be seen as supporting not just transactions inside it, but also those embedded in previous blocks. Even if a block is not in the main chain,  we can still count the confirmations it gives previous blocks as valid. This is the basis of our proposed modification, which we call the "Greedy Heaviest-Observed Sub-Tree" chain selection rule.

Roughly speaking, since each block contains a hash of its predecessor, all blocks form a tree structure, rooted at the Genesis Block. Bitcoin currently selects the accepted history as the longest (or rather heaviest) chain in the tree. We suggest another approach: At each fork, pick the sub-tree containing the most blocks (or rather the blocks with greatest combined difficulty). Do this repeatedly until reaching a leaf. The path traversed is the chain that nodes should accept. But how does this help? Notice now, that an attacker that wishes to change the main-chain selected by the algorithm needs to make us change our decision in one of the forks. To do so, he needs to build more blocks than are contained in the entire sub-tree (and not just more blocks than are contained in the longest chain!).

Here is the pseudo-code of the GHOST chain selection rule:

1. SET B <- Genesis Block.
2. IF B has no successors: RETURN(B).
   Else: SET B <- Child of B with heaviest sub-tree.
3. GOTO 2
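For illustration only, here is a minimal Python sketch of this rule (not from the paper; children and subtree_work are hypothetical structures a node would maintain as blocks arrive):

Code:
# Hypothetical in-memory view of the block tree:
#   children[b]     -> list of known child blocks of b
#   subtree_work[b] -> combined difficulty of the subtree rooted at b
def ghost_tip(genesis, children, subtree_work):
    """Follow the heaviest-observed sub-tree at every fork until reaching a leaf."""
    b = genesis
    while children.get(b):
        # At each fork, descend into the child whose subtree carries the
        # greatest combined difficulty (not necessarily the longest chain).
        b = max(children[b], key=lambda c: subtree_work[c])
    return b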

The cost of such a modification: At low block creation rates and small block sizes, there is almost no difference between the longest-chain rule and the GHOST rule. There is no cost. Both are almost identical since the longest chain in these cases is also the heaviest subtree. At high transaction rates, GHOST builds slightly fewer blocks in its main chain (because it doesn't always extend the longest chain), thus slightly lowering the number of transactions accepted per second, but it does so more securely! Delays and many off-chain blocks no longer make it more susceptible to 50% attacks. This implies that we can increase the block creation rate and block size to levels that were previously too risky and easily make up for the loss in transaction volume. In fact, we estimate that 1 second blocks can easily be combined with rates of over 200 TPS.  This implies quite fast authorization times, but much more importantly, an increased granularity in confirmations. Even 1 confirmation gives some level of certainty regarding irreversibility, and it comes almost instantly when blocks are generated every second.

Since Bitcoin's security relies primarily on the number of confirmations received instead of on elapsed time, we end up getting irreversibility of transactions with very high probability in far less than 10 minutes.  

Recap of main issues and questions raised:
I'd like to clarify one thing: We do not claim to be able to solve every problem one might think of with regards to Bitcoin's various aspects (incentives, mining centralization, etc.), so some of the criticism is aimed at problems that are already there in the protocol. Our main claim is that security gets very bad if high transaction rates occur, and the GHOST modification fixes that (and that alone). If you are uncomfortable with 1 second blocks, the modification is equally valuable at lower block rates. The need to address high transaction rates is still there.

Some specific comments:

1) Advantage of nodes with low latency. Nodes that can reach the rest of the network quickly have more blocks on the main chain and fewer orphans.

Answer: This is a serious problem that the current bitcoin protocol will face if high transaction rates (i.e., large block sizes) are pushed through the system. Strong miners will be able to get more than their proportional share of the blocks. We don't improve things, but the GHOST modification doesn't hurt either. The only improvement that we offer is that the 50% attack does not get worse in these scenarios.

2) DDOS attack: Malicious nodes can mine blocks above the genesis block at difficulty 1 and spam the network

Answer: This too is a problem that the current bitcoin protocol faces. It's why checkpoints are constantly put in place. (Anyone can currently claim to have a chain with harder proofs of work, and just start sending difficulty 1 blocks in a long chain that never ends. See gmaxwell's explanation here:
https://bitcointalk.org/index.php?topic=194078.msg2014204#msg2014204.)
Checkpoints can also be applied to our solution.

Edit: several people have noted that there are other solutions besides checkpoints (not currently implemented). We'll need to see if these can be adapted here as well. I currently have no reason to believe this won't be possible.

3) SPV clients need more bandwidth + storage if there are blocks every second (given that there will be 600 times as many block headers to download).

Answer: This is true. But the fix should probably be to find better mechanisms for the SPV nodes. For example, it should be possible to probabilistically verify a long chain of block headers without actually downloading all of them. Here's a basic idea: Have the full node build a merkle tree of all blocks + difficulties, and send it to the SPV client. The SPV client picks blocks at random from the chain (weighted by the difficulty they should have) and requests merkle branches to them (relative to the root it was sent) + checks their difficulty. Just a few checks would be enough to verify that there is indeed enough work stored in the chain (with very high probability).
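To make the sampling idea concrete, here is a rough Python sketch (my illustration, not a worked-out protocol; fetch_header_and_branch, branch_matches_root and header_meets_difficulty are assumed helpers):

Code:
import random

def audit_chain_work(claimed_difficulties, merkle_root,
                     fetch_header_and_branch, branch_matches_root,
                     header_meets_difficulty, samples=30):
    """Spot-check a chain's claimed work: sample block heights weighted by
    their claimed difficulty, then verify each sample's merkle branch against
    the committed root and its header's proof of work."""
    heights = list(range(len(claimed_difficulties)))
    picks = random.choices(heights, weights=claimed_difficulties, k=samples)
    for h in picks:
        header, branch = fetch_header_and_branch(h)   # ask the full node
        if not branch_matches_root(h, header, branch, merkle_root):
            return False
        if not header_meets_difficulty(header, claimed_difficulties[h]):
            return False
    return True

Because samples are weighted by difficulty, a chain that fakes most of its claimed work would be caught with high probability after only a few dozen checks.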

4) Implementation: How would nodes know about orphaned blocks if they were offline for a while?

Answer: We suggest that each block will also contain hashes of off-chain (orphaned) blocks in its header (only those not written in blocks it built upon). This way, the main chain also contains a record of all off-chain blocks, so it knows what to ask for.  Another reason we think it is a good idea: We think difficulty should be retargeted according to the total number of blocks built per second (incl. orphans), not based on the length of the longest chain (retargeting for that is harder at high rates).
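As a toy illustration of that retargeting idea (my sketch, with hypothetical names): scale difficulty by how the observed block rate, orphans included, compares to the target rate.

Code:
def retarget(old_difficulty, total_blocks_observed, window_seconds, target_blocks_per_second):
    """Retarget on the total number of blocks produced in the window
    (main chain plus recorded off-chain blocks), not on chain length."""
    observed_rate = total_blocks_observed / window_seconds
    return old_difficulty * (observed_rate / target_blocks_per_second)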

5) Storage issue: a lot of space will be needed

Answer: Solutions like the mini-blockchain are still applicable, and allow you to keep only a record of recent events. Still, there is no getting around the fact that high transaction rates will require a lot of storage (it's not easy scaling to the size of Paypal / VISA). Again, this is not caused by our modification. You do not need to keep the contents of orphaned blocks -- only their headers to verify difficulty. Having more blocks in the chain per second merely implies each one of them will have fewer transactions inside it. The only added cost is keeping more headers.
stephwen
Member
**
Offline

Activity: 83
Merit: 10


View Profile
December 06, 2013, 09:25:39 AM
 #2

Hi Aviv,

Has your paper already been peer-reviewed?
If so, in which journal has it been published?
trout
Sr. Member
****
Offline

Activity: 333
Merit: 251


View Profile
December 06, 2013, 09:29:25 AM
 #3

What about block rewards and tx fees? Whom do they go to?

If they still go only to the heaviest branch, then there's still an incentive to create
smaller blocks that propagate faster, not including any transactions.
avivz78 (OP)
Newbie
*
Offline

Activity: 21
Merit: 0


View Profile WWW
December 06, 2013, 09:36:56 AM
 #4

stephwen: We've submitted it, but it is still unpublished.

trout: The protocol still ends up picking a main chain, and fees + rewards still go to these blocks.
It may be interesting to reward off-chain blocks as well, but we didn't find a nice way to do this without ruining the money creation schedule.
tutkarz
Hero Member
*****
Offline

Activity: 546
Merit: 500


View Profile
December 06, 2013, 09:46:00 AM
 #5

Did you by any chance look into how to reduce the blockchain size, especially since there will be more transactions if your solution were implemented?

avivz78 (OP)
Newbie
*
Offline

Activity: 21
Merit: 0


View Profile WWW
December 06, 2013, 09:48:42 AM
 #6

Did you by any chance look into how to reduce the blockchain size, especially since there will be more transactions if your solution were implemented?

Solutions that have been proposed in the past like the mini-blockchain idea still work in this case as well.
Luckybit
Hero Member
*****
Offline

Activity: 714
Merit: 510



View Profile
December 06, 2013, 09:57:04 AM
 #7

Brilliant solution. I really hope developers pick this up.
mmitech
Legendary
*
Offline

Activity: 1148
Merit: 1001


things you own end up owning you


View Profile
December 06, 2013, 10:24:28 AM
 #8

I didn't read all pages (yet), but I think you did a great job there. Did you send this to the dev team?
Peter Todd
Legendary
*
expert
Offline

Activity: 1120
Merit: 1147


View Profile
December 06, 2013, 10:26:21 AM
 #9

I'm going to have to sit down and read the paper more carefully to have a solid opinion on it, but my meta-opinion is that I think it's great to see people taking scalability and blocksize seriously on an academic level. We've got some incredibly naive ideas floating around the Bitcoin space right now - like the idea that just removing the blocksize limit entirely will work out fine - and we need research into scalability solutions that considers incentives and the resulting security carefully. That kind of reasoning tends to involve a lot of math, rather than platitudes about "market forces"


Speaking of, while the paper presents a solution preserving security guarantees, a quick skim of it doesn't seem to indicate they take into account the incentives around block propagation. If you wind up with a situation where large, centralized mining pools earn more money as a part of this high-speed propagation game, even though in theory all the work being done contributes towards 51% security, the overall result may be a serious negative due to the incentives towards centralization. Lately I've done some work (pdf) on that topic; it's a very important crypto-currency design consideration that I'd like to see other people analyzing as well.

EhVedadoOAnonimato
Hero Member
*****
Offline

Activity: 630
Merit: 500



View Profile
December 06, 2013, 10:30:01 AM
 #10

OP, I haven't read your academic paper, but I'm already interested by the summary you just made.

Could you tell me if your solution has anything to do with what is proposed by Maged in this thread: https://bitcointalk.org/index.php?topic=57647.0;all ?

A block-tree could eventually render freezing attacks more difficult. Such advantage alone already makes it interesting. If as a plus it also improves performance significantly, then it's definitely something that should be studied more seriously.
tholenst
Newbie
*
Offline

Activity: 24
Merit: 0


View Profile
December 06, 2013, 10:31:59 AM
 #11

From what I understand, the main idea of the paper is the change in how the "valid" blockchain is selected.

Currently, when a new block arrives, figuring out whether the current valid blockchain changes costs O(1) processing time: for each block you need to remember what the cost of the chain up to this block is. Right now, in your scheme, I do not see a way to improve on Theta(depth) processing time for each block which comes in. Just wondering: do you see a better way to do this? I wouldn't be surprised if an O(log(depth)) data structure exists.
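(For reference, the straightforward Theta(depth) bookkeeping looks something like the sketch below -- hypothetical structures, not from the paper; whether an O(log(depth)) scheme exists is exactly the open question above.)

Code:
def add_block(block_hash, parent_hash, difficulty, parent_of, subtree_work):
    """Naive update: record the new block, then walk up towards the genesis
    block adding its difficulty to every ancestor's subtree weight."""
    parent_of[block_hash] = parent_hash
    subtree_work[block_hash] = difficulty
    node = parent_hash
    while node is not None:          # the genesis block has parent None
        subtree_work[node] += difficulty
        node = parent_of.get(node)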
Daily Anarchist
Hero Member
*****
Offline

Activity: 614
Merit: 500



View Profile WWW
December 06, 2013, 10:35:38 AM
 #12

I'm going to have to sit down and read the paper more carefully to have a solid opinion on it, but my meta-opinion is that I think it's great to see people taking scalability and blocksize seriously on an academic level. We've got some incredibly naive ideas floating around the Bitcoin space right now - like the idea that just removing the blocksize limit entirely will work out fine - and we need research into scalability solutions that considers incentives and the resulting security carefully. That kind of reasoning tends to involve a lot of math, rather than platitudes about "market forces"


Speaking of, while the paper presents a solution preserving security guarantees, a quick skim of it doesn't seem to indicate they take into account the incentives around block propagation. If you wind up with a situation where large, centralized mining pools earn more money as a part of this high-speed propagation game, even though in theory all the work being done contributes towards 51% security, the overall result may be a serious negative due to the incentives towards centralization. Lately I've done some work (pdf) on that topic; it's a very important crypto-currency design consideration that I'd like to see other people analyzing as well.

I'm not a developer, so excuse my ignorance. But wouldn't 1 second blocks essentially make solo mining much more plausible? It seems at that rate even lowly cpu miners could pull off solo mining. Might not be profitable per se, but the chances of at least mining a block solo would be greatly increased.

Discover anarcho-capitalism today!
avivz78 (OP)
Newbie
*
Offline

Activity: 21
Merit: 0


View Profile WWW
December 06, 2013, 10:39:00 AM
 #13

Speaking of, while the paper presents a solution preserving security guarantees, a quick skim of it doesn't seem to indicate they take into account the incentives around block propagation. If you wind up with a situation where large, centralized mining pools earn more money as a part of this high-speed propagation game, even though in theory all the work being done contributes towards 51% security, the overall result may be a serious negative due to the incentives towards centralization. Lately I've done some work (pdf) on that topic; it's a very important crypto-currency design consideration that I'd like to see other people analyzing as well.

You make a really good point. Decentralization is hurt at higher transaction rates, and better incentives are needed. We do mention it as a problem that remains at the very end of the paper (issues related to decentralization). The problem seems inherent to both our suggestion and to Bitcoin's current implementation.

Let me just say, that with small block sizes (i.e., when full bloom filters are implemented) it will be very hard to distribute blocks faster than the network does already. IMO, this is less of a problem than people think.
Peter Todd
Legendary
*
expert
Offline

Activity: 1120
Merit: 1147


View Profile
December 06, 2013, 10:42:06 AM
 #14

I'm going to have to sit down and read the paper more carefully to have a solid opinion on it, but my meta-opinion is that I think it's great to see people taking scalability and blocksize seriously on an academic level. We've got some incredibly naive ideas floating around the Bitcoin space right now - like the idea that just removing the blocksize limit entirely will work out fine - and we need research into scalability solutions that considers incentives and the resulting security carefully. That kind of reasoning tends to involve a lot of math, rather than platitudes about "market forces"


Speaking of, while the paper presents a solution preserving security guarantees, a quick skim of it doesn't seem to indicate they take into account the incentives around block propagation. If you wind up with a situation where large, centralized mining pools earn more money as a part of this high-speed propagation game, even though in theory all the work being done contributes towards 51% security, the overall result may be a serious negative due to the incentives towards centralization. Lately I've done some work (pdf) on that topic; it's a very important crypto-currency design consideration that I'd like to see other people analyzing as well.

I'm not a developer, so excuse my ignorance. But wouldn't 1 second blocks essentially make solo mining much more plausible? It seems at that rate even lowly cpu miners could pull off solo mining. Might not be profitable per se, but the chances of at least mining a block solo would be greatly increased.

Yeah, but if the result of the solution is such that solo-mining earns you only 50% of the revenue that hashing in a large, 25% hashing power pool got you, your incentive would be to mine in the pool instead, leading to centralization. The problem is we know that pools already have higher revenue per unit hashing power; my suspicion is the solution proposed by the paper makes that problem even worse. But I've got a flight to catch in an hour or two and will get a chance to think about it all more carefully later. Smiley

caveden
Legendary
*
Offline

Activity: 1106
Merit: 1004



View Profile
December 06, 2013, 10:51:38 AM
 #15

We've got some incredibly naive ideas floating around the Bitcoin space right now - like the idea that just removing the blocksize limit entirely will work out fine - and we need research into scalability solutions that considers incentives and the resulting security carefully. That kind of reasoning tends to involve a lot of math, rather than platitudes about "market forces"

Wow... This presumption of knowledge of yours reminded me of this brilliant quote, perfectly applicable here:

Quote from: Friedrich von Hayek
The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.

The subjective reasoning of millions can't involve "lots of math", and can't be modeled by it. You can only reach meaningful conclusions through praxeology. And, if you were to actually try that, you'd probably recognize one day that you should not try to impose top-down rulings instead of organic bottom-up organizations, as if people were peons on a chessboard to be manipulated by enlightened "planners". Anyways... that's not the topic here, I don't want to derail this very interesting thread, so I'll stop here. It would be nice if you could also try not to throw insults around like this every time you can...
Peter Todd
Legendary
*
expert
Offline

Activity: 1120
Merit: 1147


View Profile
December 06, 2013, 10:51:59 AM
 #16

Speaking of, while the paper presents a solution preserving security guarantees, a quick skim of it doesn't seem to indicate they take into account the incentives around block propagation. If you wind up with a situation where large, centralized mining pools earn more money as a part of this high-speed propagation game, even though in theory all the work being done contributes towards 51% security, the overall result may be a serious negative due to the incentives towards centralization. Lately I've done some work (pdf) on that topic; it's a very important crypto-currency design consideration that I'd like to see other people analyzing as well.

You make a really good point. Decentralization is hurt at higher transaction rates, and better incentives are needed. We do mention it as a problem that remains at the very end of the paper (issues related to decentralization). The problem seems inherent to both our suggestion and to Bitcoin's current implementation.

Let me just say, that with small block sizes (i.e., when full bloom filters are implemented) it will be very hard to distribute blocks faster than the network does already. IMO, this is less of a problem than people think.

Oh good to hear your thinking about this! I agree with you that there aren't easy answers yet.

Something to keep in mind is that the network distributing blocks != miners distributing blocks. Large miners and mining pools can and do peer to each other directly, so propagation delays on the global P2P bitcoin network don't affect them the same way as smaller miners who are not in a position to do that. When the block "interval" is getting down to just a few seconds, even stuff like physically locating your mining pool hashing power closer to other concentrations of hashing power may net you more revenue, with obvious problems for decentralization. You might have a small mining setup, and want to contribute that decentralized hashing power, but find you can't profitably because all the big mining pools already have peering agreements with each other and dedicated high-speed connections.

Also I'm not convinced that bloom filters are the way to do block distribution with only txid hashes; peers simply maintaining lists of what transactions each peer knows about and transmitting short tags instead is probably more efficient in terms of bandwidth as it is not probabilistic.

Finally, I've got a proposal for a blockchain where blocks themselves could only contain tx hashes at a fundamental level; transactions don't get verified at all in the scheme and the blockchain is strictly for proof-of-publication. Seems to me that it'd be well-suited to your idea as the lack of verification makes it easy to have actual tx publication be something done after the fact. (though note my comments about issues when there is a lack of incentives to actually publish the data that is being committed)

molecular
Donator
Legendary
*
Offline

Activity: 2772
Merit: 1019



View Profile
December 06, 2013, 11:00:56 AM
 #17

Hi Aviv,

Has your paper already been peer-reviewed?

We are peer-reviewing it now ;-). No better review mechanisms than a btctalk thread.

Firstly: it's awesome to see such research.

Now, is litecoin too conservative to try to incorporate this?

I think yes, so let's make TreeCoin?


PGP key molecular F9B70769 fingerprint 9CDD C0D3 20F8 279F 6BE0  3F39 FC49 2362 F9B7 0769
Peter Todd
Legendary
*
expert
Offline

Activity: 1120
Merit: 1147


View Profile
December 06, 2013, 11:07:56 AM
 #18

Hi Aviv,

Has your paper already been peer-reviewed?

We are peer-reviewing it now ;-). No better review mechanisms than a btctalk thread.

+1


Firstly: it's awesome to see such research.

Now, is litecoin too conservative to try to incorporate this?

I think yes, so let's make TreeCoin?

Note how this idea can easily be implemented as a merge-mined coin, and if merge-mined with Bitcoin, that can naturally lead towards an eventual "pure" Bitcoin implementation; I'd recommend considering using the Bitcoin UTXO set as the basis for the coin's initial distribution. Of course, until the merge-mined chain gets >50% hashing power of Bitcoin as a whole it's in theory insecure, but that's IMO fine for an experiment that may lead to something concrete. (do it merge-mined with testnet first)

Daily Anarchist
Hero Member
*****
Offline

Activity: 614
Merit: 500



View Profile WWW
December 06, 2013, 11:10:31 AM
 #19

Hi Aviv,

Has your paper already been peer-reviewed?

We are peer-reviewing it now ;-). No better review mechanisms than a btctalk thread.

Firstly: it's awesome to see such research.

Now, is litecoin too conservative to try to incorporate this?

I think yes, so let's make TreeCoin?



One of the reasons none of the altcoins have ever appealed to me is their lame ass names. No offense, but I can't see the whole world using something called Treecoins.

Would it be possible to implement this directly from the Bitcoin blockchain, creating a split? Where the current holders of bitcoins now are also the holders of Bitcoin 2.0? And then people will be vested in both with a market between the two, eventually finding out which one "wins."

Discover anarcho-capitalism today!
molecular
Donator
Legendary
*
Offline

Activity: 2772
Merit: 1019



View Profile
December 06, 2013, 11:12:07 AM
 #20

I didn't read the paper yet, but I have some problems even understanding the basic concept.

If the idea is to also accept transactions in orphaned blocks (or forks), then how can one even be sure all nodes are aware of the transactions contained in the orphans? With the current longest-chain scheme this is easy: I can just walk back through all the blocks and get a list of all transactions, which is exactly the list of all valid transactions. With the proposed change I don't see how I can get a provably complete list of all valid transactions.

Please someone enlighten me.

PGP key molecular F9B70769 fingerprint 9CDD C0D3 20F8 279F 6BE0  3F39 FC49 2362 F9B7 0769
molecular
Donator
Legendary
*
Offline

Activity: 2772
Merit: 1019



View Profile
December 06, 2013, 11:13:33 AM
 #21

Hi Aviv,

Has your paper already been peer-reviewed?

We are peer-reviewing it now ;-). No better review mechanisms than a btctalk thread.

Firstly: it's awesome to see such research.

Now, is litecoin too conservative to try to incorporate this?

I think yes, so let's make TreeCoin?



One of the reasons none of the altcoins have ever appealed to me is their lame ass names. No offense, but I can't see the whole world using something called Treecoins.

Would it be possible to implement this directly from the Bitcoin blockchain, creating a split? Where the current holders of bitcoins now are also the holders of Bitcoin 2.0? And then people will be vested in both with a market between the two, eventually finding out which one "wins."

This is how I would go about it when making an altcoin, too. But for most altcoiners the appeal is that they can be early adopters with their new coin. You don't have that with the bitcoin 2.0 approach.

PGP key molecular F9B70769 fingerprint 9CDD C0D3 20F8 279F 6BE0  3F39 FC49 2362 F9B7 0769
Voodah
Sr. Member
****
Offline

Activity: 266
Merit: 250



View Profile
December 06, 2013, 11:42:13 AM
 #22

Hi Aviv,

Has your paper already been peer-reviewed?

We are peer-reviewing it now ;-). No better review mechanisms than a btctalk thread.

Firstly: it's awesome to see such research.

Now, is litecoin too conservative to try to incorporate this?

I think yes, so let's make TreeCoin?



One of the reasons none of the altcoins have ever appealed to me is their lame ass names. No offense, but I can't see the whole world using something called Treecoins.

Would it be possible to implement this directly from the Bitcoin blockchain, creating a split? Where the current holders of bitcoins now are also the holders of Bitcoin 2.0? And then people will be vested in both with a market between the two, eventually finding out which one "wins."

This is how I would go about it when making an altcoin, too. But for most altcoiners the appeal is that they can be early adopters with their new coin. You don't have that with the bitcoin 2.0 approach.


Yeah, but this is for the good of Bitcoin, not for the altcoiners.

Still, I'm not so sure how good it would be to fragment the Bitcoin economy.

If the peer reviewing goes well, it seems to me that this is more of a candidate for the dev team to work on, than an alt-coin.
hamiltino
Hero Member
*****
Offline

Activity: 644
Merit: 500


P2P The Planet!


View Profile
December 06, 2013, 12:01:43 PM
 #23

You guys have made the biggest contribution to the biggest problem with cryptocurrencies: transaction times.


Now we just need to implement it + zerocoin support. Ahh, how I love open source.

If the bitcoin devs sit on their ass and don't look into it, you guys should make your own altcoin with the innovation. Prove it works to the world.

stacking coin
amincd
Hero Member
*****
Offline

Activity: 772
Merit: 501


View Profile
December 06, 2013, 12:17:03 PM
Last edit: December 06, 2013, 11:51:45 PM by amincd
 #24

This is incredibly serendipitous. I began writing a paper in Google Docs this week, titled:

"Disposable, non-orphaning, merge-mined blockchains for rapid transaction confirmation"

The idea is to have a parallel merge-mined mini-block 'grove' (since there could be more than one tree) to determine which transaction is chosen in the event of a double spend.

I'll describe it in case it can contribute to the discussion.

Disposable, non-orphaning, merge-mined blockchains for rapid transaction confirmation

The proposal is to have parallel chains of mini-blocks (3 seconds) that begin at every new 'real block' (10 minutes), and rather than orphaning forks of these mini-block chains with lower difficulty, nodes save them along with the greatest difficulty chain.

This parallel 'tree' or 'grove' of mini-block chains would be merge-mined with the main blockchain, to conserve hashing power, and would give a higher resolution picture of when a transaction was first propagated into the network.

Upon the creation of a new 10-minute block, nodes would check if any of the transactions contained in it (in-block transactions) have a double spend in the parallel mini-block chains that has at least 10 more mini-blocks backing it than backing the in-block transaction. The mini-blocks counted toward this comparison can be in any of the chains, including in lower difficulty forks. If the block does contain such a transaction, then it is invalid and not accepted by other nodes.

Under this rule, a transaction could be confirmed in an average of 30 seconds. Once a particular transaction is in a mini-block with 10 mini-blocks (in any chain) built on top of it, with no double spends detected, a person can be confident that the transaction cannot be replaced by a double spent transaction.

There are two further rules required to make this work:

1) Require that a new real block contain all transactions with at least 10 mini-blocks backing them. Without such a rule, mining nodes can simply refuse to include any transactions into the main block they're hashing on, or refuse to include those transactions which they've detected a double spend attempt on, to avoid the risk of a block they generate being invalid according to the mini-blockchain rule and orphaned.

To avoid an attacker being able to take advantage of this rule to bloat the blockchain with zero or low-fee transactions, the rule can be qualified with the '>=10 mini-block backed transactions that blocks are required to include' being only those with a fee to kb ratio that is 10 percent greater than the median fee to kb ratio of all fee-containing transactions over the last 100 real blocks.

The list of transactions with fees that would be used to determine the median transaction fee to kb ratio would be weighed by the 'bitcoin days destroyed' of each transaction, to prevent manipulation of the median through generation of a large number of small value, low input age, transactions.

2) Nodes refuse to build on mini-block chains that contain a transaction that conflicts (contains a double spend) with any transaction that has at least 3 mini-blocks backing it. This would ensure that any double spend transaction generated more than an average of 9 seconds after the initial transaction will have very few mini-blocks built on top of it, virtually guaranteeing the first transaction will be included in the next real block, provided enough time passes before that block is found for 7 more mini-blocks to be generated.
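A rough sketch of the double-spend validity check described above (my own illustration; mini_blocks_backing and conflicting_spends_of are assumed helpers over the locally stored mini-block grove):

Code:
REQUIRED_LEAD = 10   # mini-blocks a conflicting spend must lead by

def real_block_is_valid(block_txs, mini_blocks_backing, conflicting_spends_of):
    """Reject a 10-minute block if any transaction in it is out-backed, by at
    least REQUIRED_LEAD mini-blocks, by a conflicting double spend seen
    anywhere in the mini-block grove."""
    for tx in block_txs:
        backing = mini_blocks_backing(tx)
        for rival in conflicting_spends_of(tx):
            if mini_blocks_backing(rival) >= backing + REQUIRED_LEAD:
                return False
    return True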

Edit, a couple more thoughts:

  • Using these rules, there could be cases when there is a network split due to a block being accepted by some nodes and rejected by others, because it contains a double spend and there exist conflicting views of how many mini-blocks back the original transaction (one set of nodes could have a local view showing the first transaction out-backing the second by 9 mini-blocks, and another could have a local view showing the first transaction out-backing the second by the required 10 mini-blocks to be secure from a double spend). In this case, an additional rule, of nodes accepting a new block, even if it contains an 'illegal' double spend, if the number of mini-blocks backing the new block exceeds the number of mini-blocks generated on top of the old block after the fork by a particular number (e.g. 5 blocks), can be used to resolve network splits
  • An additional rule could be that new blocks must publish in their headers the 'median transaction fee to kb ratio' for the last 100 blocks. This would allow SPV clients to operate with lower trust requirements, since they would know just from the block headers what transaction fee they need to include with their transaction to guarantee it will be secure from a double spend attempt at/after 10 mini-blocks

The strengths of this proposal are:

Like the GHOST protocol additions proposed in the linked paper, this provides faster confirmations without a rise in orphans caused by propagation latency reducing the effective network hashrate, as results from simply shortening block times.

There would be no extra sets of block headers in the blockchain as would be required with more frequent blocks.

It's a soft-fork that is backward compatible with non-mining nodes that don't upgrade.
karma9
Newbie
*
Offline

Activity: 13
Merit: 0


View Profile
December 06, 2013, 12:27:49 PM
 #25

I will be carefully monitoring this technology; if it proves to be successful on paper and in test runs, and bitcoin developers fail to implement it, I'll be investing in an alt-coin which uses it.
avivz78 (OP)
Newbie
*
Offline

Activity: 21
Merit: 0


View Profile WWW
December 06, 2013, 12:29:23 PM
 #26

I didn't read the paper yet, but I have some problems even understanding the basic concept.

If the idea is to also accept transactions in orphaned blocks (or forks), then how can one even be sure all nodes are aware of the transactions contained in the orphans? With the current longest-chain scheme this is easy: I can just walk back through all the blocks and get a list of all transactions, which is exactly the list of all valid transactions. With the proposed change I don't see how I can get a provably complete list of all valid transactions.

Please someone enlighten me.


The idea is to still accept transactions only from a single chain, but to allow blocks off the chain to also contribute confirmations to the part of the chain they reference.
Think of it this way:
Each block gives 1 confirmation to the block it references directly, and to all preceding blocks.
If there is a fork at level h, and you need to decide which one to consider as the right path to take, simply pick the block with the most confirmations (rather than the block that supports the longest chain).


RE altcoins based on this improvement:

1) I think it's more important to see a testnet that is stress-tested with large transaction loads. This is something worth doing that others have mentioned on countless occasions.

2) IMO, if crypto-currencies succeed as we hope they do (and Bitcoin is probably the most likely to grow to large sizes at this point) they will end up with high transaction rates. So we think eventually something will have to be done. Half of our paper is dedicated to what happens to Bitcoin without our protocol improvement. There is room for growth there, but we worry that after a while security becomes an issue (the 50% attack is easier, and you need to wait more blocks for the same level of certainty when facing weaker attackers).
hashman
Legendary
*
Offline

Activity: 1264
Merit: 1008


View Profile
December 06, 2013, 12:57:30 PM
 #27

Thank you!

One thing that makes it a little hard for me to follow your notation is that I am unclear on the definitions of beta and lambda.  Beta is the rate of block additions to the "main chain" and lambda is the rate of block additions which have some potential to wind up in the main chain if they are accepted.  At least, that is my reading of it.

However, there is no point at which a block becomes 100% canonized in the main chain (well, other than checkpointing).  It is therefore hard to make a rigorous definition of beta.  After all, the block chain is a probabilistic, not a deterministic, solution to the distributed consensus problem.  It seems that the difference between lambda and beta is also called the orphan rate, a term you choose not to use, possibly because you are afraid people will accuse you of suggesting we put orphans to work in the mines Wink

Another comment is that some of the maths you develop could also be useful in analysis of shares in a mining pool.         
avivz78 (OP)
Newbie
*
Offline Offline

Activity: 21
Merit: 0


View Profile WWW
December 06, 2013, 01:07:06 PM
 #28

Thank you!

One thing that makes it a little hard for me to follow your notation is that I am unclear on the definitions of beta and lambda.  Beta is the rate of block additions to the "main chain" and lambda is the rate of block additions which have some potential to wind up in the main chain if they are accepted.  At least, that is my reading of it.

However, there is no point at which a block becomes 100% canonized in the main chain (well, other than checkpointing).  It is therefore hard to make a rigorous definition of beta.  After all, the block chain is a probabilistic, not a deterministic, solution to the distributed consensus problem.  It seems that the difference between lambda and beta is also called the orphan rate, a term you choose not to use, possibly because you are afraid people will accuse you of suggesting we put orphans to work in the mines Wink

You are right, the chain is never really fixed.  But think of it this way: under the standard protocol, a chain only gets replaced by a longer one. We thus only look at the length of the longest chain (ignoring which one it actually is) and check the rate of events that make it grow (this could include a switch to a different chain).

Another comment is that some of the maths you develop could also be useful in analysis of shares in a mining pool.        

Thanks! That's an interesting comment that we did not consider. I suppose you refer to issues related to stale shares?
jaked
Member
**
Offline

Activity: 60
Merit: 10


View Profile
December 06, 2013, 01:10:55 PM
 #29

A typo in the paper caught my eye:
"As of November 2014, it is valued.."

You're obviously referring to 2013.
Mike Hearn
Legendary
*
expert
Offline

Activity: 1526
Merit: 1128


View Profile
December 06, 2013, 01:21:28 PM
 #30

Copying my response from an email thread:


I really like this paper. It's nice to see a strong theoretical grounding for what some of the rather arbitrary choices in Bitcoin could be.

One thing I note is that there are lots of ways to optimise block wire representations. Sending blocks as lists of hashes, for example, would use 32 byte hashes in our current code just because it's lazily reusing the Bloom filtering path. But 32 bytes is massive overkill for this. You only need to distinguish between transactions in the mempool. You could easily drop to 16, 8 or even 4 byte hashes (hash prefixes) and still be able to disambiguate what the peers block message contains very reliably. Indeed they could be varint encoded so the hash is only as long as it needs to be given current traffic loads. Malicious actors might try to create collisions to force block propagation times up, but if peers negotiate random byte subsets upon connect for relaying of block contents then such an attack becomes impossible.

Given the way the maths works out, fairly simple optimisations like that can keep block messages small even with very large amounts of traffic and a 10 minute propagation time. So the findings in this paper seem pretty reassuring to me. There is so much low hanging fruit for optimising block propagation.

Re: SPV wallets. I'm not sure what kind of probabilistic verification you have in mind. The cost of processing headers is primarily in downloading and storing them. Checking they are correct and the PoWs are correct isn't that expensive. So it's not really obvious to me how to do probabilistic verification. In fact it can be that again smarter encodings just make this problem go away. The prev block hash does not need to be written to the wire as 32 bytes. It can be simply however many bytes are required to disambiguate from the set of possible chain heads. When seeking forward using getheaders there is only ever one possible previous block header, so the largest component of the 80 bytes can be eliminated entirely. Likewise, given the rarity of block header version transitions, the redundant repetition of the version bytes can also be excluded in getheader responses. Those two optimisations together can nearly halve the header size for the most common wire cases, allowing a doubling of the block rate with no disk/bandwidth impact on SPV clients.
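As a toy illustration of the hash-prefix point (my sketch, not Mike's code): a node only needs enough prefix bytes to tell apart the transactions currently in its mempool.

Code:
def shortest_unique_prefix_len(mempool_txids, min_bytes=4):
    """Smallest prefix length (in bytes) that still uniquely identifies every
    txid in the mempool. Real relay code would also have to defend against
    deliberately crafted collisions, e.g. via per-connection salting."""
    n = min_bytes
    unique_full = len(set(mempool_txids))
    while len({txid[:n] for txid in mempool_txids}) < unique_full:
        n += 1
    return n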
trout
Sr. Member
****
Offline

Activity: 333
Merit: 251


View Profile
December 06, 2013, 01:24:52 PM
 #31

There's something in the paper about SPV clients: "These nodes will face a
larger number of block headers to download."
Can you give an estimate of how much larger this would be?
mission
Newbie
*
Offline

Activity: 21
Merit: 0


View Profile
December 06, 2013, 01:28:19 PM
 #32

I love you!!!
phelix
Legendary
*
Offline

Activity: 1708
Merit: 1019



View Profile
December 06, 2013, 01:37:49 PM
Last edit: December 06, 2013, 02:14:50 PM by phelix
 #33

Great idea. We need an altcoin implementing it. Bounty: I pledge 10BTM

edit: "Heaviest branch selection"
michaelmclees
Hero Member
*****
Offline

Activity: 633
Merit: 500


View Profile
December 06, 2013, 01:41:38 PM
 #34

For merchants wishing for nearly instant security, why not simply have all the large pools sign each valid transaction and send the batch of signatures out to the merchant?

The pools would be verifying that they will include the transaction in their next block, should they find it.  Suppose the merchant is basically OK with 0 confirm purchases for small things, but would like a little more security if he could get it.  He signs up with what we'll call "The Service" and The Service has contracted with the 10 largest pool operators.  For all incoming BTC to the merchants addresses, The Service sends a confirmation request to all the pools.

Will you be accepting this transaction in your next block?

They all say "YES" and, depending on the threshold the merchant is willing to live with (perhaps he wants 8 of 10 to immediately say YES), he gets his verification in the form of signatures or no replies.  Double spends are prevented because all the pools already said YES to one or the other, so one merchant is getting perhaps 8 Yes, 1 No, and 1 No Reply.  The other merchant is getting 8 No, 1 Yes, and 1 No Reply.

All this happens before the next block is found.

If a pool says Yes but fails to include the transaction, or says No, but includes it anyway, the pool is either on the hook for the money OR they lose credibility and marketshare.
amincd
Hero Member
*****
Offline

Activity: 772
Merit: 501


View Profile
December 06, 2013, 01:45:06 PM
 #35

^ This is trust-based, when what's needed is a protocol rule that makes this determination automated.
michaelmclees
Hero Member
*****
Offline

Activity: 633
Merit: 500


View Profile
December 06, 2013, 02:07:58 PM
 #36

Fair enough.  People fear protocol changes though, so this may be a political issue.
TierNolan
Legendary
*
Offline

Activity: 1232
Merit: 1077


View Profile
December 06, 2013, 02:56:12 PM
 #37

Since high transaction rates imply many conflicting blocks are created, it would be quite useful if these blocks were not really lost. In fact, each block can be seen as supporting not just transactions inside it, but also those embedded in previous blocks. Even if a block is not in the main chain,  we can still count the confirmations it gives previous blocks as valid. This is the basis of our proposed modification, which we call the "Greedy Heaviest-Observed Sub-Tree" chain selection rule.

Roughly speaking, since each block contains a hash of its predecessor, all blocks form a tree structure, rooted at the Genesis Block. Bitcoin currently selects the accepted history as the longest (or rather heaviest) chain in the tree. We suggest another approach: At each fork, pick the sub-tree containing the most blocks (or rather the blocks with greatest combined difficulty).

What is nice about this is that it is backwards compatible.

For blocks that are off the main chain, you only need to send the headers.  This could be made part of the headers first system.  Each node would download the entire header tree first and then download the full blocks.

Quote
In high transaction rates, GHOST builds slightly less blocks in its main-chain (because it doesn't always extend the longest chain), thus slightly lowering the number of transactions accepted per second, but it does so more securely!

The difficulty update would have to be based on the main chain length, so the block rate would remain the same.

Quote
In fact, we estimate that 1 second blocks can be easily combined with rates of over 200 TPS.

A faster block rate mainly helps with confirmation time rather than transactions per second.  Your node still needs to process the entire block to make sure all the transactions are valid.

A much higher block rate (with lots of blocks thrown away) would mean that nodes would end up verifying multiple blocks.

Quote
This implies quite fast authorization times, but much more importantly, an increased granularity in confirmations. Even 1 confirmation gives some level of certainty regarding irreversibly, and it comes almost instantly when blocks are generated every second.

Since Bitcoin's security relies primarily on the number of confirmations received instead of on elapsed time, we end up getting irreversibility of transactions with very high probability in far less than 10 minutes.  

It depends on the hash value too.  Your transaction needs to have hashing power built on it that is much more valuable than the transaction.  The block rate determines protection against random reversal.

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
AsymmetricInformation
Member
**
Offline

Activity: 115
Merit: 10


View Profile WWW
December 06, 2013, 03:18:03 PM
 #38

Quote
The basic observation behind the protocol modification that we suggest, is that blocks that are off the main chain can still contribute to a chain’s irreversibility. Consider for example a block B, and two blocks that were created on top of it, C1 and C2, i.e., parent(C1) = parent(C2) = B. The Bitcoin protocol, at its current form, will eventually adopt only one of the sub-chains rooted at C1 and C2, and will discard the other. Note however, that both blocks were created by nodes that have accepted block B and its entire history as correct. The heaviest sub-tree protocol we suggest makes use of this fact, and adds additional weight to block B, helping to ensure that it will be part of the main chain.

Cool!

Support Decentralized Bitcoin Prediction Markets: 1M5tVTtynuqiS7Goq8hbh5UBcxLaa5XQb8
https://github.com/psztorc/Truthcoin
superresistant
Legendary
*
Offline

Activity: 2128
Merit: 1120



View Profile
December 06, 2013, 03:24:50 PM
 #39

Yes please do something.
gglon
Member
**
Offline

Activity: 64
Merit: 10


View Profile
December 06, 2013, 03:26:33 PM
 #40

How about splitting the reward equally among all blocks at level h? The exact tx paying the miners would be just included x levels later.
AsymmetricInformation
Member
**
Offline

Activity: 115
Merit: 10


View Profile WWW
December 06, 2013, 03:31:11 PM
 #41

Would it be possible to implement this directly from the Bitcoin blockchain, creating a split? Where the current holders of bitcoins now are also the holders of Bitcoin 2.0? And then people will be vested in both with a market between the two, eventually finding out which one "wins."
This is how I would go about it when making an altcoin, too. But for most altcoiners the appeal is that they can be early adopters with their new coin. You don't have that with the bitcoin 2.0 approach.

Still, I'm not so sure how good it would be to fragment the Bitcoin economy.

Just fragment it once, on say Feb 1, 2014, to test it out and see how everything is working, with the complete understanding of everyone that, if the 'treecoin' implementation works, you will 'fragment' again on March 1st, 2014, with the intent of hard-forking THAT into new Bitcoin.

Therefore, passive users can coordinate to use Bitcoin Chain, knowing that their BTC will convert to Bitcoin Tree on March 1st if all is well, and not worry about any messy stuff in between. Those using Treecoin between Feb and March will get them for free but know that they will eventually lose them, making for a perfect test environment and no financial instability. Best of both worlds.

Support Decentralized Bitcoin Prediction Markets: 1M5tVTtynuqiS7Goq8hbh5UBcxLaa5XQb8
https://github.com/psztorc/Truthcoin
nomailing
Full Member
***
Offline

Activity: 126
Merit: 100


View Profile
December 06, 2013, 04:24:16 PM
 #42

Would it be possible to implement this directly from the Bitcoin blockchain, creating a split? Where the current holders of bitcoins now are also the holders of Bitcoin 2.0? And then people will be vested in both with a market between the two, eventually finding out which one "wins."
This is how I would go about it when making an altcoin, too. But for most altcoiners the appeal is that they can be early adopters with their new coin. You don't have that with the bitcoin 2.0 approach.

Still, I'm not so sure how good it would be to fragment the Bitcoin economy.

Just fragment it once, on say Feb 1, 2014, to test it out and see how everything is working, with the complete understanding of everyone that, if the 'treecoin' implementation works, you will 'fragment' again on March 1st, 2014, with the intent of hard-forking THAT into new Bitcoin.

Therefore, passive users can coordinate to use Bitcoin Chain, knowing that their BTC will convert to Bitcoin Tree on March 1st if all is well, and not worry about any messy stuff in between. Those using Treecoin between Feb and March will get them for free but know that they will eventually lose them, making for a perfect test environment and no financial instability. Best of both worlds.

This is a very good suggestion for how a hardfork should be implemented.

BM-2D9KqQQ9Fg864YKia8Yz2VTtcUPYFnHVBR
Natanael
Newbie
*
Offline

Activity: 27
Merit: 0


View Profile WWW
December 06, 2013, 05:34:42 PM
 #43

Are anybody interested in a notary based system with temporary pay-to-script-hash wallets (P2SH) using multisig transactions?

My scheme requires no Bitcoin protocol change, and requires very limited trust in notaries, which can be replaced quickly and whose malice is easy to prove (just show two conflicting transactions that both got signed; these can be distributed globally in seconds to everybody that has set their clients to trust that notary). This means that notaries have practically nothing at all to gain from an attack, and it still makes 0-confirmation transactions trustable.

http://roamingaroundatrandom.wordpress.com/2013/11/30/bitcoin-idea-temporary-notarized-wallets-secure-zero-confirmation-payments-using-temporary-notarized-p2sh-multisignature-wallets/
jl2012
Legendary
*
Offline

Activity: 1792
Merit: 1087


View Profile
December 06, 2013, 06:23:18 PM
 #44

I am not sure if I get it correct, but I think we can implement it like this:

1. Let say there are two miners, Alice and Bob, finding blocks of the same height at almost the same time.

2. All miners see both blocks. However, some see Alice's block first, some see Bob's block first.

3. Those seeing Alice's block first will by default mine on top of Alice's block. However, they will also include Bob's block header in the coinbase. (The hash for the previous block could be omitted to save some space)

4. Those seeing Bob's block will similarly put Alice's block header in the coinbase.

5. No matter what the next block is building on, the header of the orphaned block is permanently and irreversibly recorded in the blockchain.

6. The total PoW of the chain should also count the orphaned blocks, given that they satisfy the difficulty. The actual content of the orphaned block is irrelevant since it is only used as a confirmation to the previous block.

There is a drawback: In case there is a fork, SPV nodes have to read the coinbases in order to determine the longest chain. With 1 second blocks that would mean tons of branches and SPV nodes may have some trouble.

I think it is quite workable. We may even implement it without changing the rules first, and see how it works out. It would help strengthen the network even if we keep the 10-minute block interval.
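A rough sketch of how the chain-weight calculation could look under this proposal, assuming simplified block/header structures and taking "work" as a given number per header rather than deriving it from the difficulty target:

Code:
# Sketch of jl2012's idea: when ranking competing chains, also count the work
# of orphaned blocks whose headers are embedded in coinbase data. Structures
# are hypothetical simplifications.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Header:
    hash: str
    work: float  # work implied by this header's difficulty

@dataclass
class Block:
    header: Header
    orphan_headers: List[Header] = field(default_factory=list)  # embedded in coinbase

def chain_weight(chain: List[Block]) -> float:
    """Total proof-of-work of a chain, crediting each embedded orphan header
    once as an extra confirmation of the branch it points into."""
    seen = set()
    total = 0.0
    for block in chain:
        for h in [block.header] + block.orphan_headers:
            if h.hash not in seen:
                seen.add(h.hash)
                total += h.work
    return total

if __name__ == "__main__":
    a = Block(Header("A", 1.0))
    b = Block(Header("B", 1.0), orphan_headers=[Header("B'", 1.0)])
    print(chain_weight([a, b]))  # 3.0: orphan B' adds weight to this branch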

Donation address: 374iXxS4BuqFHsEwwxUuH3nvJ69Y7Hqur3 (Bitcoin ONLY)
LRDGENPLYrcTRssGoZrsCT1hngaH3BVkM4 (LTC)
PGP: D3CC 1772 8600 5BB8 FF67 3294 C524 2A1A B393 6517
Cryddit
Legendary
*
Offline Offline

Activity: 924
Merit: 1122


View Profile
December 06, 2013, 06:44:24 PM
 #45

How about FOUR THOUSAND (or even FORTY THOUSAND) transactions per second?

This is so cool!  I had come up with a scalability solution of my own earlier this week, and this one MULTIPLIES by it!  

A fundamental change in protocol design has been exactly what Bitcoin needs to continue to be viable as it grows.  This will do it.  Actually either of these will do it.  But combining them is AWESOME!

By combining these two ideas we could get THOUSANDS of transactions per second.  

My scalability solution was to split wallets into multiple "channels" and have one blockchain for each pair of channels.  So when you get to the point where 6 channels (15 blockchains) is having trouble coping with transaction volume, you can shift to 7 channels (21 blockchains).  Every client would be listening on one channel (5 blockchains before the additional channel is added; 6 blockchains afterward), and each channel would contain a total of 1/6 (or 1/7) the total tx volume.  By the time you get up to 20 channels (190 blockchains) you could handle a thousand tx per second even with Bitcoin's current speed.

You'd need one additional blockchain (probably with an extremely low volume of transactions) which would be in all  channels, to handle the rare tx that involves wallets in *more* than 2 different channels (all of them much more complicated than the standard pay-to-script-hash). So each single-channel client would actually be listening to 6 (or 7, or 20) blockchains - but the last one nearly negligible in bandwidth and volume.  

These blockchains would back each other up; every time a new block appears in any blockchain it would list the hashes of the most recent block so far found in all other blockchains.  The mining awards would simply be split between blockchains.  Thus all blockchains would be "merge mined" together.

One downside is that any particular transaction would only be checked by the clients on two channels rather than by all clients.   IE, all users would be relying on tx checked by 1/3 (or 2/7, or 1/10) of total users, so it's no longer a completely trustless model; you'd be trusting a few thousand other users not to collude to deceive you, rather than trusting that ALL of them would not collude to deceive you.

Upsides:

It would be easier to "throttle" bandwidth without refusing altogether to participate, simply by limiting the number of blockchains in which you advertise full node (or "download this blockchain from me") service.  Bandwidth costs could even be made much less onerous than that; see the second point below.

It would be easier to run a fully participating node for one channel.  It would use only 1/6 or 1/7 the total bandwidth and tx checking CPU load, and only 1/6 or 1/7 (or far less; see the next point) the storage, of Bitcoin's single blockchain architecture, spread across all blockchains in your channel.

There is a huge privacy and bandwidth enhancement:  The length of a particular blockchain could be limited. You could at any time transfer all of a blockchain's UTXO to a different blockchain (and everybody could check that you did it correctly), shut down the now-empty blockchain, and start a new one to replace it.  Effectively this would amount to a very aggressive "pruning" strategy for releasing transaction history -- which would also be a huge contribution to financial privacy and a huge savings in bandwidth and storage.  At any given time, the blockchains in use might average only a week or so old.  Only one blockchain, probably the "all-channels chain" which would have a very light transaction history anyway, would need to stretch all the way back to the Genesis Block.  Of course this wouldn't help against an opponent who keeps copies of all historical blockchains, but it would preclude downloading all transactions in history from any full node, once the blockchain those tx were on has been pruned.

Any particular user who wants to would have the choice to support the protocol more fully by subscribing to multiple (or all) channels if they have enough bandwidth and storage to do so.

Now, what's awesome about this is that these two enhancements multiply by each other. If we can get *EACH BLOCKCHAIN* up to 200 tx per second, then multiplying the number of blockchains as outlined above would be many times as effective.  

With 7 channels, or 21(22) blockchains, we'd be up to over 4000 tx/second, rather than the 150 or so I had been envisioning. With 20 channels (190 blockchains) you could be approaching 40,000 tx/second rather than the 1400 or so I had been envisioning.  
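The arithmetic behind these figures, as a small sketch: with n channels there is one blockchain per unordered pair of channels plus the shared "blockchain zero", and aggregate capacity is roughly the number of pair chains times the per-chain TPS (ignoring blockchain zero and any cross-chain overhead).

Code:
# Back-of-the-envelope arithmetic for the channel scheme described above.
def pair_chains(n_channels: int) -> int:
    """Number of two-channel blockchains (blockchain zero not included)."""
    return n_channels * (n_channels - 1) // 2

def aggregate_tps(n_channels: int, per_chain_tps: float) -> float:
    """Rough total throughput if every pair chain runs at per_chain_tps."""
    return pair_chains(n_channels) * per_chain_tps

for n in (6, 7, 20):
    print(n, pair_chains(n), aggregate_tps(n, 200))
# 6  -> 15 chains,  3000 tx/s
# 7  -> 21 chains,  4200 tx/s
# 20 -> 190 chains, 38000 tx/s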
nomailing
Full Member
***
Offline Offline

Activity: 126
Merit: 100


View Profile
December 06, 2013, 07:46:34 PM
 #46

Quote
How about FOUR THOUSAND (or even FORTY THOUSAND) transactions per second? [...] By combining these two ideas we could get THOUSANDS of transactions per second. [...] With 7 channels, or 21(22) blockchains, we'd be up to over 4000 tx/second, rather than the 150 or so I had been envisioning. With 20 channels (190 blockchains) you could be approaching 40,000 tx/second rather than the 1400 or so I had been envisioning.


This is a very nice idea. How would you make sure that all these blockchains have the same monetary base?

I was thinking along similar lines.
It would be a good idea to order the channels according to geographic locations or according to different industries. For example you could have bitcoin addresses like these:
/btc/1CiZPrEJdN4FJcqdLdgVLzT8tgCXxT5ion
/btc/europe/1CiZPrEJdN4FJcqdLdgVLzT8tgCXxT5ion
/btc/europe/spain/1CiZPrEJdN4FJcqdLdgVLzT8tgCXxT5ion
/btc/usa/california/1CiZPrEJdN4FJcqdLdgVLzT8tgCXxT5ion
The child blockchains are represented as unique addresses within their parent blockchain. The top blockchains will have much less TPS because most transactions can be assumed to take place within the same geographic region and therefore take place within the same sub-chains.
The child and parent blocks would always link to each other to determine the transaction processing between different chains.
The total coin-generation could then be distributed from the main /btc chain to child chains according to their stake.
If a user wants to run a full node in chain x, then he has to run a full node in all parents of x and at least SPV nodes in the direct child chains of x.

When you would create a transaction from chain /btc/usa to chain /btc/europe it would go like this:
You submit a transaction to the subchain /btc/usa with a script that says "move coins to parent and then to address xyz in subchain europe".
The transaction is first processed by a miner in the subchain /btc/usa and included in a block. The parent chain /btc sees the transaction, because all nodes in the parent have to run at least an SPV in the child. Then the parent chain includes it in a block. Internally within the parent this is just recorded by transferring some coins between the monetary base of one subchain to the other subchain. Then the childchain /btc/europe sees the transaction in the block of the parent chain and includes it so that the coins finally arrive at the destination address.

Maybe your approach is better, because it would allow direct transactions between wallets in all different channels. Maybe it is also possible to combine your approach with a tree structure?

So for example you could have a parent blockchain and channel which is called /btc. This would manage the total money supply and the distribution of this money supply between all your child-channels.
Then there are 3 child channels (/btc/usa, /btc/europe, ...) and thus 9 blockchains which handle the transactions between these channels. You could further add child channels and thereby you reduce your total number of required blockchains from 190 to something more reasonable.
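A sketch of the routing rule described above, using the slash-separated chain names as-is: a cross-chain payment is recorded in the source chain, in every chain up to the lowest common ancestor, and back down to the destination chain.

Code:
# Sketch of cross-chain routing in the tree-of-chains idea: the transaction
# is recorded in the source chain, each chain up to the lowest common
# ancestor, and each chain down to the destination.
def route(src: str, dst: str) -> list:
    a = src.strip("/").split("/")
    b = dst.strip("/").split("/")
    # find the lowest common ancestor of the two paths
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    lca = "/" + "/".join(a[:i])
    up = ["/" + "/".join(a[:k]) for k in range(len(a), i, -1)]        # source side
    down = ["/" + "/".join(b[:k]) for k in range(i + 1, len(b) + 1)]  # destination side
    return up + [lca] + down

print(route("/btc/usa", "/btc/europe"))
# ['/btc/usa', '/btc', '/btc/europe']
print(route("/btc/europe/spain", "/btc/europe/italy"))
# ['/btc/europe/spain', '/btc/europe', '/btc/europe/italy']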

BM-2D9KqQQ9Fg864YKia8Yz2VTtcUPYFnHVBR
Cryddit
Legendary
*
Offline Offline

Activity: 924
Merit: 1122


View Profile
December 06, 2013, 08:08:08 PM
 #47

Well, you could assign channels to geographic regions or industries, but each blockchain is shared by 2 channels.  So if a client in europe (channel 3 let's say) wants to transmit money to a client in America (channel 2) they'd need to use blockchain 2.3, which is distributed on both continents.  For example with 6 channels you might have:

channel 1 blockchains: 1.2 ,1.3, 1.4, 1.5, 1.6  (asia)
channel 2 blockchains: 1.2, 2.3, 2.4, 2.5, 2.6  (north america)
channel 3 blockchains: 1.3, 2.3, 3.4, 3.5, 3.6  (europe)
channel 4 blockchains: 1.4, 2.4, 3.4, 4.5, 4.6  (south america)
channel 5 blockchains: 1.5, 2.5, 3.5, 4.5, 5.6  (africa)
channel 6 blockchains: 1.6, 2.6, 3.6, 4.6, 5.6  (australia)

and all channels share blockchain zero.

(which is a total of 15 blockchains, not 36)

But no blockchain would be restricted to just one continent, because every channel has one blockchain (besides zero) in common with every other channel.  The idea is that because most transactions involve just two people, you will very rarely need a blockchain that spans more than two channels.  blockchain zero is reserved for transactions involving more than two people.

Still, your idea is valid; someone in channel 3 could give you a bitcoin.europe.Gux8854.... payment link to click on. But there's no way to enforce such a breakdown; the guy with the channel 3 address might live in Australia.
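A sketch of the chain-selection rule this implies (the single-channel case below just picks an arbitrary pair chain for illustration; load-balancing by mempool size, as suggested later in the thread, would work too):

Code:
# Sketch of chain selection in the pairwise-channel scheme: two channels use
# the blockchain they share, three or more channels fall back to blockchain
# zero, and a single-channel payment can use any of that channel's pair chains.
def select_chain(channels: set) -> str:
    if len(channels) > 2:
        return "0"                      # the all-channels chain
    if len(channels) == 2:
        a, b = sorted(channels)
        return f"{a}.{b}"               # e.g. channels {2, 3} -> chain "2.3"
    # single channel: pick any pair chain that includes it (arbitrary choice here)
    (c,) = channels
    other = 1 if c != 1 else 2
    a, b = sorted((c, other))
    return f"{a}.{b}"

print(select_chain({3, 2}))      # "2.3" (the Europe <-> North America example)
print(select_chain({1, 4, 6}))   # "0"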



nomailing
Full Member
***
Offline Offline

Activity: 126
Merit: 100


View Profile
December 06, 2013, 09:02:13 PM
 #48

Quote
Well, you could assign channels to geographic regions or industries, but each blockchain is shared by 2 channels. [...] every channel has one blockchain (besides zero) in common with every other channel.  The idea is that because most transactions involve just two people, you will very rarely need a blockchain that spans more than two channels.  blockchain zero is reserved for transactions involving more than two people. [...]

I thought it makes more sense to have the chains you described, but in addition 6 individual chains for transactions that take place within a single channel. So you would have blockchain 0 and 21 distinct subchains (the 15 pair chains plus 6 per-channel chains):
channel 1 blockchains: 1, 1.2 ,1.3, 1.4, 1.5, 1.6  (asia)
channel 2 blockchains: 2, 1.2, 2.3, 2.4, 2.5, 2.6  (north america)
channel 3 blockchains: 3, 1.3, 2.3, 3.4, 3.5, 3.6  (europe)
channel 4 blockchains: 4, 1.4, 2.4, 3.4, 4.5, 4.6  (south america)
channel 5 blockchains: 5, 1.5, 2.5, 3.5, 4.5, 5.6  (africa)
channel 6 blockchains: 6, 1.6, 2.6, 3.6, 4.6, 5.6  (australia)

A transaction from channel europe to channel asia will only be recorded in blockchain 0 and blockchain 1.3.
A transaction from channel europe to channel europe will only be recorded in blockchain 0 and 3.

The header of subchain europe-asia will include a field that states the total transaction volume between channels europe to asia or vice versa.
Then channel 0 will include the header of the subchain and directly knows how many coins are within each individual channel.

In comparison to your proposal it has the advantage that someone in channel asia does not need to know a transaction which takes place from channel europe to channel europe. As I read your proposal everyone would have to record transactions which take place within another channel. Therefore I think it makes sense to have one chain per channel and in addition what you proposed a chain per pair of channels.

You can then repeat the procedure and create channels /europe/spain and /europe/italy, but now the channel /europe would act as the parent chain, which just records the coin flow between the channels but not within them. With this tree structure it would be possible to have a lot of channels with a roughly linear relationship between the number of channels and the number of chains. For example, if each of the 6 channels above had another 6 subchannels, you would have a total of 1+6+36 channels (top, second and third level) and in total only 1+21+126 = 148 distinct chains.
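A counting sketch for this combined scheme, under my reading that each group of n sibling channels gets n per-channel chains plus n(n-1)/2 pair chains, with the root contributing one chain of its own:

Code:
# Counting distinct chains in the combined per-channel + pairwise scheme.
def chains_per_group(n: int) -> int:
    """One chain per channel plus one chain per unordered pair of channels."""
    return n + n * (n - 1) // 2          # 6 -> 21

def total_chains(branching: int, levels: int) -> int:
    total = 1                             # the root /btc chain
    groups = 1
    for _ in range(levels):
        total += groups * chains_per_group(branching)
        groups *= branching
    return total

print(total_chains(6, 1))  # 22  (root chain + 21)
print(total_chains(6, 2))  # 148 (root chain + 21 + 6*21)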

BM-2D9KqQQ9Fg864YKia8Yz2VTtcUPYFnHVBR
Cryddit
Legendary
*
Offline Offline

Activity: 924
Merit: 1122


View Profile
December 06, 2013, 09:06:56 PM
Last edit: December 06, 2013, 09:18:19 PM by Cryddit
 #49

Nah, if you have a chain that's dedicated to a single channel then transactions in that chain only get checked by half as many people.  Besides, it's better for people transacting within a single channel to "load balance" by picking the blockchain in their channel that has the smallest number of tx in the memory pool.

And transactions that are served by one of the two-channel blockchains never need to be in blockchain zero; that's just for tx that span 3 channels or more.  Which, in practice, is almost none of 'em.

Edit: you seem to be talking about a different approach: a "tree" of subchannels, each node of the tree having its own blockchain.  I considered that, but in that scheme it's really hard for two people (or more) who want to make a transaction to find a blockchain that they have in common where the transaction can be made.  You have to have both of them going up the tree to the lowest tree node that they have in common, which would concentrate fully a quarter of the traffic on the root blockchain, split another quarter between its two subchains, split an eighth of the traffic between their four second-level subchains, etc.  That won't scale past about four times what Bitcoin can handle now, because the root blockchain of the tree scheme would become a bottleneck.
CanaryInTheMine
Donator
Legendary
*
Offline Offline

Activity: 2352
Merit: 1060


between a rock and a block!


View Profile
December 06, 2013, 09:28:37 PM
 #50

How can this modification be used to exploit Bitcoin?  Could this change become a vector for shenanigans later?

When you can't beat it, start influencing technology selection and voilà, eventually you get your backdoors...

paranoia or food for thought?
Daily Anarchist
Hero Member
*****
Offline Offline

Activity: 614
Merit: 500



View Profile WWW
December 06, 2013, 09:31:53 PM
 #51

How can this modification be used to exploit Bitcoin?  Could this change become a vector for shenanigans later?

When you can't beat it, start influencing technology selection and voilà, eventually you get your backdoors...

paranoia or food for thought?

It's a valid concern. I don't think anybody wants to sacrifice long-term security for quicker transactions. If it gets past its initial stage of acceptance it will be heavily debated. Worst case scenario we could just fork the chain, which isn't bad in my opinion.

Discover anarcho-capitalism today!
johnyj
Legendary
*
Offline Offline

Activity: 1988
Merit: 1007


Beyond Imagination


View Profile
December 06, 2013, 09:34:25 PM
 #52

I read the paper, but it is not very clear to me. I hope there will be several real examples that don't rely only on formulas; it is easy to miss special situations when using only generalized formulas.


johnyj
Legendary
*
Offline Offline

Activity: 1988
Merit: 1007


Beyond Imagination


View Profile
December 06, 2013, 09:35:16 PM
 #53

How can this modification be used to exploit Bitcoin?  Could this change become a vector for shenanigans later?

When you can't beat it, start influencing technology selection and voilà, eventually you get your backdoors...

paranoia or food for thought?

A hard fork will most likely be rejected by miners unless they have enough motivation and have reached a consensus.

Voodah
Sr. Member
****
Offline Offline

Activity: 266
Merit: 250



View Profile
December 06, 2013, 09:41:21 PM
 #54

How can this modification be used to exploit Bitcoin?  Could this change become a vector for shenanigans later?

When you can't beat it, start influencing technology selection and voilà, eventually you get your backdoors...

paranoia or food for thought?

That's a totally sound and valid reason to worry.

I guess, if that were the main concern once the paper is assessed to be desirable, forking the Bitcoin network as it stands, with its current volume, looks like a much better option than an altogether new altcoin. We could instantly be testing at high transaction volume.
johnyj
Legendary
*
Offline Offline

Activity: 1988
Merit: 1007


Beyond Imagination


View Profile
December 07, 2013, 12:56:40 AM
 #55

From a miner's perspective, I don't see how 1 block per second is possible: it takes at least 1-2 seconds to re-initialize the miner and get new work from the network, so if new blocks arrive every second, the miner would never be ready for the real calculation to take place Huh

adpinbr
Sr. Member
****
Offline Offline

Activity: 963
Merit: 254



View Profile
December 07, 2013, 02:22:33 AM
 #56

Great, let's take something extremely complex and misunderstood, and right when people start to understand it and believe in it, let's change it. Yeah, that's where I want to keep my money. Sarcasm aside, how will this affect the distribution of bitcoins?


ffe
Sr. Member
****
Offline Offline

Activity: 308
Merit: 250



View Profile
December 07, 2013, 06:00:59 AM
 #57

... it's really hard for two people (or more) who want to make a transaction to find a blockchain that they have in common where the transaction can be made.  You have to have both of them going up the tree to the lowest tree node that they have in common, which would concentrate fully a quarter of the traffic on the root blockchain, split another quarter between its two subchains, split an eighth of the traffic between their four second-level subchains, etc.  That won't scale past about four times what Bitcoin can handle now, because the root blockchain of the tree scheme would become a bottleneck.

This assumes no locality to the transactions. If transactions aggregate (by geography, for example, or by business type) then people will often find blockchains in common near the leaves without needing to go to the root (less often than the 25% of the time you indicate, one hopes).

Very interesting. Reminds me of banks and clearing transactions between banks within a country vs. transactions that require a transfer between banks in different countries.
gmaxwell
Moderator
Legendary
*
expert
Offline Offline

Activity: 4158
Merit: 8343



View Profile WWW
December 07, 2013, 06:43:36 AM
 #58

From a miner's perspective, I don't see how 1 block per second is possible: it takes at least 1-2 seconds to re-initialize the miner and get new work from the network, so if new blocks arrive every second, the miner would never be ready for the real calculation to take place Huh
Yep, most hardware in use for mining has second to multi-second latencies, not even getting into the latencies of global propagation in a network of decentralized volunteers (rather than trivially coerced corporate interests).

But I think if you fixate on one second you're not getting the value of the paper. Just multiply their numbers by 10 if you find 1 second a bit outrageous, considering the practicalities that their analysis excluded.
johnyj
Legendary
*
Offline Offline

Activity: 1988
Merit: 1007


Beyond Imagination


View Profile
December 07, 2013, 12:35:08 PM
 #59

From a miner's perspective, I don't see how 1 block per second is possible: it takes at least 1-2 seconds to re-initialize the miner and get new work from the network, so if new blocks arrive every second, the miner would never be ready for the real calculation to take place Huh
Yep, most hardware in use for mining has second to multi-second latencies, not even getting into the latencies of global propagation in a network of decentralized volunteers (rather than trivially coerced corporate interests).

But I think if you fixate on one second you're not getting the value of the paper. Just multiply their numbers by 10 if you find 1 second a bit outrageous, considering the practicalities that their analysis excluded.

I will read the full paper more carefully; just a quick glance at the paper:

"Practically, individual nodes’ limited bandwidth imposes an additional constraint on these parameters. If every node’s bandwidth is at least 0.427 MB per second, then the network is able to maintain a rate of 1 blocks per second and b = 320 KB per block, with 10000
transaction hashes per block, achieving a TPS of more than 214.0"

If we drop this from 1 block per second to 1 block per 30 seconds, then the TPS will drop to about 7, the same as the current design Huh

gglon
Member
**
Offline Offline

Activity: 64
Merit: 10


View Profile
December 07, 2013, 01:08:36 PM
 #60

If we drop this from 1 block per second to 1 block per 30 seconds, then the TPS will drop to about 7, the same as the current design Huh

You need to multiply blocksize b as well.
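To put gglon's point in numbers, a toy scaling calculation (not the paper's actual formula): to a first approximation, TPS is proportional to block rate times block size, so slowing the rate must be compensated by growing the blocks. The 214 TPS / 320 KB / 1-block-per-second operating point is the one quoted above.

Code:
# Toy scaling: TPS is roughly proportional to (block rate) x (block size).
def scaled_tps(base_tps: float, rate_factor: float, size_factor: float) -> float:
    return base_tps * rate_factor * size_factor

print(scaled_tps(214.0, 1 / 30, 1))    # ~7 TPS: 1 block per 30 s, blocks still 320 KB
print(scaled_tps(214.0, 1 / 30, 30))   # ~214 TPS: 1 block per 30 s, ~9.6 MB blocks
# (propagation limits, which this ignores, are what cap the block size in practice)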
johnyj
Legendary
*
Offline Offline

Activity: 1988
Merit: 1007


Beyond Imagination


View Profile
December 07, 2013, 02:23:39 PM
 #61

If we drop this from 1 block per second to 1 block per 30 seconds, then the TPS will drop to about 7, the same as the current design Huh

You need to multiply blocksize b as well.

This is what confuses me: at 32 transactions per KB, a 320 KB block would hold 10,240 transactions, not the 242.4 transactions per second stated on page 21 of the paper.

But since they are using a different method to propagate the blocks, maybe what they said is true? Unfortunately that 242.4 TPS is given by a formula whose meaning is unclear to me; I have to read more.

Peter Todd
Legendary
*
expert
Offline Offline

Activity: 1120
Merit: 1147


View Profile
December 07, 2013, 03:50:50 PM
 #62

Quote
1) Advantage of nodes with low latency. Nodes that can reach the rest of the network quickly have more blocks on the main chain and less orphans.

Answer: This is already going on in the current bitcoin protocol. We don't improve things, but our modification doesn't hurt either.

Actually it makes the problem significantly worse as more orphans leads to more opportunities for a larger miner to have an advantage over a smaller one through orphans.

Note the equations for P(Q,L) in the paper I'm working on analyzing centralization incentives - they all depend on the block interval in such a way that a smaller block interval makes the larger hashing power more significant.

I suggest you correct your post.

johnyj
Legendary
*
Offline Offline

Activity: 1988
Merit: 1007


Beyond Imagination


View Profile
December 07, 2013, 04:04:23 PM
 #63

A practical consideration: just as Christian Decker and Roger Wattenhofer reported, a 320KB block would take 80 seconds to reach 90% of nodes, or 40 seconds to reach 50% of nodes. In that case, any block generated less than 40 seconds after the previous one cannot reach half of the network in time.

And if blocks were generated every 5 seconds, most of them would only reach a tiny part of the whole network no matter how small they are. So the network would end up with lots of conflicting views of which subtree is the biggest, and thus become fragmented.

How can this problem be resolved?
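A rough back-of-the-envelope model of this concern, assuming Poisson block arrivals (an assumption, not something from the paper): if propagation takes d seconds and blocks arrive every T seconds on average, the chance that another block appears before yours has propagated is 1 − exp(−d/T). It ignores partial propagation and only shows how fast conflicts pile up as T approaches d.

Code:
# Rough model: probability that some other block is found within the
# propagation delay of yours, assuming Poisson block arrivals.
import math

def conflict_probability(propagation_delay_s: float, block_interval_s: float) -> float:
    return 1.0 - math.exp(-propagation_delay_s / block_interval_s)

for interval in (600, 30, 5, 1):
    p = conflict_probability(40, interval)   # 40 s to reach half the network
    print(f"block every {interval:>3} s: P(conflict) ~ {p:.2f}")
# block every 600 s: ~0.06, every 30 s: ~0.74, every 5 s and 1 s: ~1.00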

kcirazy
Newbie
*
Offline Offline

Activity: 53
Merit: 0



View Profile
December 07, 2013, 04:15:16 PM
 #64

I'm working on analyzing centralization incentives

I suggest that the value of a bitcoin is directly related to the trust in the decentralization of the system.
And the level of trust in Bitcoin is somewhat related to its (de)centralization. The higher the centralization, the lower the value per block.
I don't see any mention of that in your paper about 'centralization incentives'. Shouldn't that be part of the equation?
Peter Todd
Legendary
*
expert
Offline Offline

Activity: 1120
Merit: 1147


View Profile
December 07, 2013, 04:32:46 PM
 #65

I'm working on analyzing centralization incentives

I suggest that the value of a bitcoin is directly related to the trust in the decentralization of the system.
And the level of trust in Bitcoin is somewhat related to its (de)centralization. The higher the centralization, the lower the value per block.
I don't see any mention of that in your paper about 'centralization incentives'. Shouldn't that be part of the equation?

It's an externality. If your logic were right, coal power plants wouldn't exist, but they do.

zimmah
Legendary
*
Offline Offline

Activity: 1106
Merit: 1005



View Profile
December 07, 2013, 04:40:09 PM
 #66

If a block is created every second on average, blocks will be generated so fast that it will create huge problems.

The network will not be able to verify the order of the blocks, and it will be impossible to know which chain is the real chain.

The whole reason the blockchain orders transactions is to prevent double spending that abuses lag to mess up the order of transactions, when the network disagrees about which transaction happened first.

There's no way the whole network can confirm a block within a second.
tucenaber
Sr. Member
****
Offline Offline

Activity: 337
Merit: 252


View Profile
December 07, 2013, 04:55:01 PM
 #67

There's no way the whole network can confirm a block within a second.

Clearly you have not read the paper.
iddo
Sr. Member
****
Offline Offline

Activity: 360
Merit: 251


View Profile
December 07, 2013, 05:10:02 PM
 #68

After some discussion at #bitcoin-wizards (freenode IRC) with Aviv Zohar and several Bitcoin devs, it seems to me that there is an advantage to the current Bitcoin rule for selecting the best chain, over the proposed rule in this new paper.

First note that there is ongoing research by Bitcoin devs to enable lite nodes to verify whether txns are valid (link1, link2, link3), which may also allow us to remove the checkpoints in the client, because new nodes can be bootstrapped without having to verify the signatures in the entire history; hence the Bitcoin system will become more trust-free.

Consider an attacker who creates many difficulty-1 orphans that extend the genesis block (which requires relatively low PoW effort) and thereby DoS-attacks the other nodes by bloating their local copy of the blockchain with these many orphans. Note that headers-first sync will mitigate this attack somewhat, because the nodes will only fetch and keep the headers of the orphans, but it wouldn't fully mitigate this attack. Right now, Bitcoin is protected from this attack because of the checkpoints, as clients will reject orphans at genesis because of the later checkpoints.

If the checkpoints are removed, I think that Bitcoin can still have anti-DoS mechanisms against this kind of attack. For example, the most naive anti-DoS protection would be for the node to have some quota and not accept more than a certain number of forks that extend an (old) block, with a small risk that it may later need to request those rejected blocks from peers and waste communication.

So, under the assumption that eliminating checkpoints to have a zero-trust system is a worthy goal, the question is whether we can have anti-DoS protection with this paper's proposed rule for best-chain selection. Depending on how exactly nodes reject blocks that are suspected to be attack-driven, I think there is a danger that this could cause netsplits and that the network won't re-converge. Even if the network does converge, the communication complexity could be significantly greater, as nodes would frequently need to prove to other nodes (who rejected or deleted blocks due to anti-DoS protection) that they have a subtree with a bigger weight.

I think that there are pros/cons to this newly proposed rule versus the Bitcoin rule (the complexity needed for lite nodes is probably another disadvantage, and of course there are also significant advantages, as discussed in the paper). At this particular moment in time I'd feel more secure exchanging my fiat money for bitcoins than for a cryptocoin that implements the newly proposed chain selection rule Smiley But I'm of course open to the possibility that the new rule will lead to a system that's more secure overall.

maaku
Legendary
*
expert
Offline Offline

Activity: 905
Merit: 1011


View Profile
December 07, 2013, 06:03:17 PM
 #69

iddo, I think there are three things to point out with respect to that wizards' conversation (where we didn't really reach consensus):

1) The DoS danger can be avoided with some trivial heuristics, for example: drop or ignore orphaned blocks with less than factor X of the difficulty of the currently accepted best block, and drop blocks furthest from the current best block when orphan storage exceeds Y megabytes/gigabytes. In reality there are a spectrum of possibilities between nodes having omniscient knowledge about all forks (pure GHOST), and nodes blindly following the most-work chain (bitcoin current). Even a heuristic-limited, DoS-safe version somewhere in the middle of that spectrum would be an improvement over today.

2) This is a local-only change that does not require consensus. It's okay for light nodes to still follow the most-work chain. What this change does is provide miners specifically with a higher chance of ending up on the final chain, by better estimating which fork has the most hash power behind it.

3) What this patch actually accomplishes is a little obscured by the hyperbole of the OP. There are alt coins which had a constant difficulty, and those which had incredibly low block intervals. I'm speaking in the past tense: they basically imploded when their network size increased and the propagation time approached the block interval. This patch wouldn't fix that problem (because eventually you run out of bandwidth), but it does let you get a lot closer to the network's block propagation time before witnessing catastrophic consequences. This is important because when we increase the maximum transaction processing rate of bitcoin, it will be by either dropping the block interval or increasing the maximum block size (both of which are hard forks), and how far we can safely do that is bounded by this limit, among others.
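A minimal sketch of the two heuristics in point 1, with placeholder thresholds (the "factor X" and "Y megabytes" values below are arbitrary illustrations, not something anyone has proposed):

Code:
# Sketch of the two orphan-handling heuristics: ignore side-branch blocks with
# difficulty far below the current best block, and when stored orphans exceed
# a size budget, evict those furthest from the current tip.
from dataclasses import dataclass

@dataclass
class StoredBlock:
    hash: str
    difficulty: float
    height: int
    size_bytes: int

class OrphanPool:
    def __init__(self, min_difficulty_factor: float, max_bytes: int):
        self.min_difficulty_factor = min_difficulty_factor  # "factor X"
        self.max_bytes = max_bytes                          # "Y megabytes"
        self.blocks = {}

    def accept(self, blk: StoredBlock, best: StoredBlock) -> bool:
        # Heuristic 1: drop blocks with difficulty far below the best block's.
        if blk.difficulty < self.min_difficulty_factor * best.difficulty:
            return False
        self.blocks[blk.hash] = blk
        self._evict(best)
        return True

    def _evict(self, best: StoredBlock):
        # Heuristic 2: when over budget, evict blocks furthest from the tip.
        def used():
            return sum(b.size_bytes for b in self.blocks.values())
        while used() > self.max_bytes and self.blocks:
            victim = max(self.blocks.values(),
                         key=lambda b: abs(best.height - b.height))
            del self.blocks[victim.hash]

if __name__ == "__main__":
    pool = OrphanPool(min_difficulty_factor=0.5, max_bytes=2_000_000)
    best = StoredBlock("tip", difficulty=1000.0, height=500, size_bytes=300_000)
    print(pool.accept(StoredBlock("old", 1.0, 1, 80), best))            # False: too easy
    print(pool.accept(StoredBlock("near", 900.0, 499, 300_000), best))  # True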

I'm an independent developer working on bitcoin-core, making my living off community donations.
If you like my work, please consider donating yourself: 13snZ4ZyCzaL7358SmgvHGC9AxskqumNxP
avivz78 (OP)
Newbie
*
Offline Offline

Activity: 21
Merit: 0


View Profile WWW
December 07, 2013, 06:15:18 PM
 #70

Quote
1) Advantage of nodes with low latency. Nodes that can reach the rest of the network quickly have more blocks on the main chain and less orphans.

Answer: This is already going on in the current bitcoin protocol. We don't improve things, but our modification doesn't hurt either.

Actually it makes the problem significantly worse as more orphans leads to more opportunities for a larger miner to have an advantage over a smaller one through orphans.

Note the equations for P(Q,L) in the paper I'm working on analyzing centralization incentives - they all depend on the block interval in such a way that a smaller block interval makes the larger hashing power more significant.

I suggest you correct your post.


Peter, I agree with you that there is a big problem with high orphan rates, but this is not a symptom of GHOST, but rather a symptom of high transaction rates.

Whether you use GHOST or the longest-chain rule, at high rates you must either resort to large blocks that propagate slowly or to high block creation rates. There is no getting around that. Both cause a high orphan rate (or perhaps the right term is a high rate of off-chain blocks). We do not pretend GHOST solves this problem. The only thing we claim is that GHOST makes the protocol secure at high rates -- the 50% attack can no longer be executed with less than 50% of the hash rate.
mission
Newbie
*
Offline Offline

Activity: 21
Merit: 0


View Profile
December 07, 2013, 07:03:41 PM
 #71

Some questions to devs please: what's the chance to see it implemented in Bitcoin? Would it take very long? What's the biggest obstacle?
maaku
Legendary
*
expert
Offline Offline

Activity: 905
Merit: 1011


View Profile
December 07, 2013, 07:58:24 PM
 #72

Some questions to devs please: what's the chance to see it implemented in Bitcoin? Would it take very long? What's the biggest obstacle?

It would need to be post-pruning to prevent DoS attacks.

EDIT: But to be clear, it gives ~0 benefit until you hard-fork to significantly increase the transaction rate.

I'm an independent developer working on bitcoin-core, making my living off community donations.
If you like my work, please consider donating yourself: 13snZ4ZyCzaL7358SmgvHGC9AxskqumNxP
Peter Todd
Legendary
*
expert
Offline Offline

Activity: 1120
Merit: 1147


View Profile
December 07, 2013, 08:46:52 PM
 #73

Peter, I agree with you that there is a big problem with high orphan rates, but this is not a symptom of GHOST, but rather a symptom of high transaction rates.

Whether you use GHOST or the longest-chain rule, at high rates you must either resort to large blocks that propagate slowly or to high block creation rates. There is no getting around that. Both cause a high orphan rate (or perhaps the right term is a high rate of off-chain blocks). We do not pretend GHOST solves this problem. The only thing we claim is that GHOST makes the protocol secure at high rates -- the 50% attack can no longer be executed with less than 50% of the hash rate.

Going back to your original post:

Quote
We note that block propagation times are the primary obstacle for scalability.

The obstacle to scalability is keeping Bitcoin decentralized while scaling up; we know that Bitcoin can scale if we sacrifice decentralization - Visa and Mastercard are doing just fine. Ultimately you're proposing something that solves one problem - the high granularity of confirmations and the long wait associated with them - at the expense of scalability with decentralization. So don't claim you've done anything other than presented an interesting academic analysis of a specific trade-off possible in the system.

This also suggests why the Bitcoin community has talked about the underlying idea in your paper repeatedly(1) among themselves, but no-one has ever bothered to investigate it more fully - it's obviously a bad trade-off in the context of proof-of-work. (with the possible exception of applying the idea to the p2pool share-chain)

1) I myself proposed it for my zookeyv key-value global consensus system proposal - #bitcoin-wizards 2013-05-31 - though in the context of proof-of-sacrifice.

iddo
Sr. Member
****
Offline Offline

Activity: 360
Merit: 251


View Profile
December 07, 2013, 09:21:48 PM
 #74

iddo, I think there are three things to point out with respect to that wizards' conversation (where we didn't really reach consensus):

This is true, so far no one claimed that this chain selection rule cannot withstand DoS (when checkpoints are removed), only that it's unclear whether it can withstand DoS.

1) The DoS danger can be avoided with some trivial heuristics, for example: drop or ignore orphaned blocks with less than factor X of the difficulty of the currently accepted best block, and drop blocks furthest from the current best block when orphan storage exceeds Y megabytes/gigabytes. In reality there are a spectrum of possibilities between nodes having omniscient knowledge about all forks (pure GHOST), and nodes blindly following the most-work chain (bitcoin current). Even a heuristic-limited, DoS-safe version somewhere in the middle of that spectrum would be an improvement over today.

Those heuristics are trivial; whether they work is less trivial Smiley It seems that when you apply such heuristics with the GHOST chain selection rule, it is more sensitive (compared to Bitcoin) to netsplits/convergence issues, and/or to communication blowup, as nodes might need to negotiate with their peers about which missing solved blocks should be re-transmitted. This will of course be easier to discuss with a specific proposal for anti-DoS rules, but it's probably difficult to design the best rules on paper and have a theoretical proof that they work; therefore (as mentioned in #bitcoin-wizards) what we really need is to do simulations.

2) This is a local-only change that does not require consensus. It's okay for light nodes to still follow the most-work chain. What this change does is provide miners specifically with a higher chance of ending up on the final chain, by better estimating which fork has the most hash power behind it.

Not sure why you call it "local-only change", because as we are discussing here, it appears that there's a risk for netsplits that don't re-converge, especially if different nodes will follow different rules.

3) What this patch actually accomplishes is a little obscured by the hyperbole of the OP. There are alt coins which had a constant difficulty, and those which had incredibly low block intervals. I'm speaking in the past tense: they basically imploded when their network size increased and the propagation time approached the block interval. This patch wouldn't fix that problem (because eventually you run out of bandwidth), but it does let you get a lot closer to the network's block propagation time before witnessing catastrophic consequences. This is important because when we increase the maximum transaction processing rate of bitcoin, it will be by either dropping the block interval or increasing the maximum block size (both of which are hard forks), and how far we can safely do that is bounded by this limit, among others.

Agreed.
maaku
Legendary
*
expert
Offline Offline

Activity: 905
Merit: 1011


View Profile
December 07, 2013, 09:36:21 PM
 #75

2) This is a local-only change that does not require consensus. It's okay for light nodes to still follow the most-work chain. What this change does is provide miners specifically with a higher chance of ending up on the final chain, by better estimating which fork has the most hash power behind it.

Not sure why you call it "local-only change", because as we are discussing here, it appears that there's a risk for netsplits that don't re-converge, especially if different nodes will follow different rules.
There is no possibility for non-convergence. The most-work tree eventually wins out. Always.

I'm an independent developer working on bitcoin-core, making my living off community donations.
If you like my work, please consider donating yourself: 13snZ4ZyCzaL7358SmgvHGC9AxskqumNxP
avivz78 (OP)
Newbie
*
Offline Offline

Activity: 21
Merit: 0


View Profile WWW
December 07, 2013, 09:40:52 PM
 #76

Going back to your original post:

Quote
We note that block propagation times are the primary obstacle for scalability.

The obstacle to scalability is keeping Bitcoin decentralized while scaling up; we know that Bitcoin can scale if we sacrifice decentralization - Visa and Mastercard are doing just fine. Ultimately you're proposing something that solves one problem - the high granularity of confirmations and the long wait associated with them - at the expense of scalability with decentralization. So don't claim you've done anything other than presented an interesting academic analysis of a specific trade-off possible in the system.

Peter,

I keep trying to agree with you.

One last try before I give up:

Surely you must agree that if there were no block propagation delays, there would be no centralization associated with scaling up. Blocks go instantly to everyone, and there are no problems with miners getting more than their share of the blocks.

I guess the difference in our perspectives is that I don't think these are optional tradeoffs. If at some point Bitcoin becomes mainstream, there will be increased pressure to allow more transactions through (or, alternatively, fees will soar higher than those of the money transmitters Bitcoin is trying to replace). You then run into several problems. One is a security problem against attackers. The other is a pull towards centralization. All I'm saying is that we claim to solve the first problem. The second clearly remains.
avivz78 (OP)
Newbie
*
Offline Offline

Activity: 21
Merit: 0


View Profile WWW
December 07, 2013, 09:47:33 PM
 #77

Some questions to devs please: what's the chance to see it implemented in Bitcoin? Would it take very long? What's the biggest obstacle?

It would need to be post-pruning to prevent DoS attacks.

EDIT: But to be clear, it gives ~0 benefit until you hard-fork to significantly increase the transaction rate.

maaku, can you provide a link to how Bitcoin would prevent similar DoS attacks (without checkpoints that is)?

I'd like to address that critique, but I don't know how it's solved in Bitcoin.
iddo
Sr. Member
****
Offline Offline

Activity: 360
Merit: 251


View Profile
December 07, 2013, 10:09:29 PM
 #78

2) This is a local-only change that does not require consensus. It's okay for light nodes to still follow the most-work chain. What this change does is provide miners specifically with a higher chance of ending up on the final chain, by better estimating which fork has the most hash power behind it.

Not sure why you call it "local-only change", because as we are discussing here, it appears that there's a risk for netsplits that don't re-converge, especially if different nodes will follow different rules.
There is no possibility for non-convergence. The most-work tree eventually wins out. Always.

There could be an extreme scenario where two fractions of the network work on different trees, and each fraction uses anti-DoS rule that rejects blocks from the tree of the other fraction unless they arrive in a batch that proves that the PoW of the competing tree wins, but because of propagation lag each fraction solves more PoW blocks in its own tree so by the time the batch arrives from the other tree, that batch is already losing and therefore rejected. No?
avivz78 (OP)
Newbie
*
Offline Offline

Activity: 21
Merit: 0


View Profile WWW
December 07, 2013, 10:41:14 PM
 #79

There is no possibility for non-convergence. The most-work tree eventually wins out. Always.

There could be an extreme scenario where two fractions of the network work on different trees, and each fraction uses anti-DoS rule that rejects blocks from the tree of the other fraction unless they arrive in a batch that proves that the PoW of the competing tree wins, but because of propagation lag each fraction solves more PoW blocks in its own tree so by the time the batch arrives from the other tree, that batch is already losing and therefore rejected. No?

Iddo, a collapse has to occur. This is exactly the same reasoning as in Theorem 8.6 in the paper: block construction is stochastic, so the two forks are not synchronized perfectly even if they have exactly the same amount of hash power supporting them.  The differences in block creation times drift apart until they grow larger than the time it takes to send the message about the total weight in the branch. Thus, a collapse eventually occurs.

With anti-DoS it's highly unlikely that you get into these sorts of forks in the first place: you do accept all blocks created, say, in the past month or so (difficulty is high enough that they are not really a DoS attack). A fork that breaks the DoS rules will only form if 1) an attacker manages to keep up with the network's rate for a whole month (which has to be a 50% attack), or 2) the network is disconnected for over a month. Even in these cases, the forks resolve rather quickly.

Besides, another heuristic can also help here: accept alternative branches into the tree even if they have 80% of the total weight of your current branch. This is still enough to stop a DoS, and certainly enough to make DoS forks collapse instantly.
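A small Monte Carlo sketch of the collapse argument (an illustration of the stochastic-drift reasoning, not Theorem 8.6 itself): with equal hash power on both sides, the difference in accumulated blocks is an unbiased random walk, so it eventually exceeds any fixed lead, at which point the weight announcement convinces the losing side to switch. The rate and "decisive lead" threshold below are arbitrary illustration values.

Code:
# Monte Carlo sketch: how long until one of two equally powerful branches
# pulls decisively ahead of the other.
import random

def time_to_collapse(rate_per_s: float, decisive_lead: int, seed: int) -> float:
    rng = random.Random(seed)
    t, diff = 0.0, 0
    while abs(diff) < decisive_lead:
        # next block on either branch: exponential waiting time, fair coin for which side
        t += rng.expovariate(2 * rate_per_s)
        diff += 1 if rng.random() < 0.5 else -1
    return t

samples = [time_to_collapse(rate_per_s=1.0, decisive_lead=10, seed=s) for s in range(200)]
print(sum(samples) / len(samples))  # average seconds until one branch is decisively ahead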
iddo
Sr. Member
****
Offline Offline

Activity: 360
Merit: 251


View Profile
December 07, 2013, 11:29:45 PM
 #80

There is no possibility for non-convergence. The most-work tree eventually wins out. Always.

There could be an extreme scenario where two fractions of the network work on different trees, and each fraction uses anti-DoS rule that rejects blocks from the tree of the other fraction unless they arrive in a batch that proves that the PoW of the competing tree wins, but because of propagation lag each fraction solves more PoW blocks in its own tree so by the time the batch arrives from the other tree, that batch is already losing and therefore rejected. No?

Iddo, a collapse has to occur. This is exactly the same reasoning as in Theorem 8.6 in the paper: block construction is stochastic, so the two forks are not synchronized perfectly even if they have exactly the same amount of hash power supporting them.  The differences in block creation times drift apart until they grow larger than the time it takes to send the message about the total weight in the branch. Thus, a collapse eventually occurs.

Right, apologies, I agree that this will eventually re-converge. But there still could be an interesting question here regarding whether it takes a long time for the two fractions to re-converge, depending on the anti-DoS rules that they deploy.

With anti-DoS it's highly unlikely that you get into these sorts of forks in the first place: you do accept all blocks created, say, in the past month or so (difficulty is high enough that they are not really a DoS attack). A fork that breaks the DoS rules will only form if 1) an attacker manages to keep up with the network's rate for a whole month (which has to be a 50% attack), or 2) the network is disconnected for over a month. Even in these cases, the forks resolve rather quickly.

Besides, another heuristic can also help here: accept alternative branches into the tree even if they have 80% of the total weight of your current branch. This is still enough to stop a DoS, and certainly enough to make DoS forks collapse instantly.

Yes, those can be sensible anti-DoS rules. But if the node accepts all blocks in the past month, we may also need a rule to delete the old blocks in the losing subtrees and thereby reclaim disk space. As Andrew Miller said in #bitcoin-wizards, in terms of incentives there's a tradeoff here since keeping some orphans around will potentially save on future bandwidth, but at the cost of storage now.
johnyj
Legendary
*
Offline Offline

Activity: 1988
Merit: 1007


Beyond Imagination


View Profile
December 08, 2013, 01:39:27 AM
 #81

Quote
We note that block propagation times are the primary obstacle for scalability.

This is the root of the concern: there is no way to increase the TPS above a certain level unless the propagation time for a large volume of transactions greatly improves (meaning a worldwide upgrade of the IP backbone and ISP infrastructure).

However, as discussed many times before, people normally don't change banks just because theirs is closed during the weekend; they wait until the next Monday to do business instead.

Similarly, if people realize the Bitcoin network's worldwide transaction capacity is limited, they will change their behavior accordingly. When fees get very high, people will plan their transactions more carefully, periodically making large transactions instead of many small ones. And this is good for Bitcoin, since it will make more people treat it as a long-term investment and reduce its volatility.





Voodah
Sr. Member
****
Offline Offline

Activity: 266
Merit: 250



View Profile
December 08, 2013, 02:34:54 AM
 #82

Quote
We note that block propagation times are the primary obstacle for scalability.

This is the root of the concern: there is no way to increase the TPS above a certain level unless the propagation time for a large volume of transactions greatly improves (meaning a worldwide upgrade of the IP backbone and ISP infrastructure).

However, as discussed many times before, people normally don't change banks just because theirs is closed during the weekend; they wait until the next Monday to do business instead.

Similarly, if people realize the Bitcoin network's worldwide transaction capacity is limited, they will change their behavior accordingly. When fees get very high, people will plan their transactions more carefully, periodically making large transactions instead of many small ones. And this is good for Bitcoin, since it will make more people treat it as a long-term investment and reduce its volatility.


Please don't take us to such a future. High fees pose a barrier to entry for under-developed countries.

A few cents of a dollar may not be much for US or EU citizens, but they're worth 10x here in Argentina, and a lot more in Africa.

Just for reference: currently, if I wanted to sell a 1 Peso (ARG) item using BTC, I would have to pay 0.68 Pesos in fees (using the bare minimum fee of 0.0001 BTC).
Cryddit
Legendary
*
Offline Offline

Activity: 924
Merit: 1122


View Profile
December 08, 2013, 03:48:30 AM
 #83

I agree.  That is NOT a good future for Bitcoin, and Bitcoin as an investment is far less stable than Bitcoin as a currency.

With an investment that is widely held but traded in small volume, a single trade involving a significant amount can make the market swing wildly.  If most coin is in the market most of the time, the same single trade has little effect on the market. 

Also, I consider it more important to undercut competitors for money movement than it is to charge according to the blockspace occupied - so I'd be all for making fees proportional to the amount transferred rather than proportional to the space occupied.  The miners might still prioritize tx by fee per kilobyte, but IMO that ought to mean that transactions in larger amounts naturally have priority to go through faster. 
mmeijeri
Hero Member
*****
Offline Offline

Activity: 714
Merit: 500

Martijn Meijering


View Profile
December 08, 2013, 08:10:34 AM
 #84

This is the root of the concern: there is no way to increase the TPS above a certain level unless the propagation time for a large volume of transactions greatly improves (meaning a worldwide upgrade of the IP backbone and ISP infrastructure).

How much potential is there for a more efficient wire encoding / data compression?

ROI is not a verb, the term you're looking for is 'to break even'.
darkmule
Legendary
*
Offline Offline

Activity: 1176
Merit: 1005



View Profile
December 08, 2013, 11:45:41 AM
 #85

Hi all,

I'm a researcher at the Hebrew University in Israel, and I've been working with a student (Yonatan Sompolinsky) on a Bitcoin related paper. We have really exciting results that we are eager to share with the Bitcoin community. We would really like to get constructive feedback on our work from the many smart people in the community, which is why I am posting here.

Thank you for what looks like an excellent solution to a future problem that has troubled me for a while about BTC.  I honestly don't have the brains to analyze it, but just wanted to say thanks.  This kind of work is the road to the future of cryptocurrencies, and it is great that people are doing it.
yaffare
Newbie
*
Offline Offline

Activity: 45
Merit: 0


View Profile
December 08, 2013, 04:15:58 PM
 #86

If you are connected to 125 nodes, then at 200 TPS the hashes alone are already 0.8 MBps:

125 * 32 * 200 = 800,000 bytes/s = 0.8 MBps

People often forget that.

Even if blocks and transactions could be teleported, the hashes alone are > 0.5 MBps.
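
For concreteness, a minimal sketch of that arithmetic (the peer count, hash size and transaction rate are the figures assumed above, not measurements):

Code:
#include <cstdio>

int main() {
    const double peers = 125;      // assumed number of connected peers
    const double hash_bytes = 32;  // a full SHA-256 transaction hash
    const double tps = 200;        // assumed transaction rate

    // Upper bound: every transaction hash is announced once to every peer.
    const double bytes_per_second = peers * hash_bytes * tps;
    std::printf("hash announcements: %.0f bytes/s = %.1f MB/s\n",
                bytes_per_second, bytes_per_second / 1e6);
    return 0;
}

Real inv messages add some per-entry and per-message overhead on top of the raw hash, so if anything this is an underestimate of the announcement traffic.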
GoldenWings91
Full Member
***
Offline Offline

Activity: 141
Merit: 100


View Profile
December 08, 2013, 06:35:19 PM
 #87

If you are connected to 125 nodes, then at 200 TPS the hashes alone are already 0.8 MBps:

125 * 32 * 200 = 800,000 bytes/s = 0.8 MBps

People often forget that.

Even if blocks and transactions could be teleported, the hashes alone are > 0.5 MBps.

At most, only the first 8 bytes of each hash would need to be relayed, likely even less:

125 * 8 * 200 = 200,000 bytes/s = 0.2 MBps

Support The Bitcoin Network By Running A Full Node
Node Stats     GPG Key-ID: 0x445DF2D8     Monetary Freedom Is A Basic Human Right
caveden
Legendary
*
Offline Offline

Activity: 1106
Merit: 1004



View Profile
December 08, 2013, 07:26:46 PM
 #88

If you are connected to 125 nodes, then at 200 TPS the hashes alone are already 0.8 MBps:

Why would you need to be connected to that many full nodes? I don't see the need for so many connections. And SPV nodes only care about the transactions that concern them, which are always a very small set compared to the total that passes by on the network.

At most, only the first 8 bytes of each hash would need to be relayed, likely even less:

125 * 8 * 200 = 200,000 bytes/s = 0.2 MBps

Aren't you mixing bytes and bits there? Bps normally means bits per second, not bytes. And only 8 bits for the hashes seems too low: you only get 256 different values, so with 200 TPS you'd have lots of collisions. 8 bytes, on the other hand, is even more than what the poster you replied to was talking about (he mentioned 32 bits, so 4 bytes).
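
A quick way to sanity-check how short a relayed hash prefix can be is the birthday approximation: with n transactions in a relay window and p-byte prefixes there are 2^(8p) possible values, so roughly n(n-1)/2^(8p+1) colliding pairs are expected. A minimal sketch (the 10-minute window at 200 TPS is an assumed example, not a figure from the thread):

Code:
#include <cmath>
#include <cstdio>

// Birthday approximation: expected number of colliding pairs when n random
// values are drawn from a space of 2^(8 * prefix_bytes).
double expected_collisions(double n, int prefix_bytes) {
    const double space = std::pow(2.0, 8.0 * prefix_bytes);
    return n * (n - 1.0) / (2.0 * space);
}

int main() {
    const double n = 200.0 * 600.0;  // assumed: 200 TPS over a 10-minute window
    for (int prefix_bytes : {1, 4, 8}) {
        std::printf("%d-byte prefix: ~%.2e expected collisions among %.0f txs\n",
                    prefix_bytes, expected_collisions(n, prefix_bytes), n);
    }
    return 0;
}

Very short prefixes collide constantly, while a few bytes already make collisions rare enough to resolve by falling back to the full hash.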
GoldenWings91
Full Member
***
Offline Offline

Activity: 141
Merit: 100


View Profile
December 08, 2013, 07:35:24 PM
 #89

Quote
Aren't you mixing bytes and bits there? Bps normally means bits per second, not bytes. And only 8 bits for the hashes seems too low: you only get 256 different values, so with 200 TPS you'd have lots of collisions. 8 bytes, on the other hand, is even more than what the poster you replied to was talking about (he mentioned 32 bits, so 4 bytes).

My mistake, I was thinking he meant 32 bytes.

Support The Bitcoin Network By Running A Full Node
Node Stats     GPG Key-ID: 0x445DF2D8     Monetary Freedom Is A Basic Human Right
jtimon
Legendary
*
Offline Offline

Activity: 1372
Merit: 1002


View Profile WWW
December 08, 2013, 10:34:43 PM
 #90

Going back to your original post:

Quote
We note that block propagation times are the primary obstacle for scalability.

The obstacle to scalability is keeping Bitcoin decentralized while scaling up; we know that Bitcoin can scale if we sacrifice decentralization - Visa and Mastercard are doing just fine. Ultimately you're proposing something that solves one problem - the high granularity of confirmations and the long wait associated with them - at the expense of scalability with decentralization. So don't claim you've done anything other than present an interesting academic analysis of a specific trade-off possible in the system.

Peter,

I keep trying to agree with you.

One last try before I give up:

Surely you must agree that if there were no block propagation delays, there would be no centralization associated with scaling up. Blocks go instantly to everyone, and there are no problems with miners getting more than their share of the blocks.

No, I bet he won't agree on that.
Let's say we want the network to validate 1 million TPS. Even if nodes communicate with each other using the ansible (zero block propagation delays), you still need every full node to validate 1 million TPS. How many full nodes do you think we would have?

If people stupidly complain about ASIC miners "causing centralization" or "SHA256 being less democratic than scrypt", wait to see what they have to say about needing a room-sized supercomputer (I heard NASDAQ's market execution computer is that big) alongside their ASIC machines for PoW. Well, they will need that computer even for full nodes that don't mine.

I guess the difference in our perspectives is that I don't think these are optional tradeoffs. If at some point bitcoin becomes mainstream, there will be an increased pressure to allow more transactions through (or alternatively, fees will soar higher than the money transmitters Bitcoin is trying to replace). You then run into several problems. One is a security problem against attackers. The other is a pull towards centralization. All I'm saying is that we claim to solve the first problem. The second clearly remains.

Bitcoin will never be able to process, on its own, the TPS of the full dollar system (with its whole hierarchy of banks).
We need off-chain transactions, micro-transactions between two parties with transaction replacement, etc.
p2p currencies are a trust-less extreme, but there are other monies that beautifully leverage trust, and we need them to complement each other (I'm talking about LETS and other local currencies, Ripple-like credit networks, simple IOUs, etc).
Technologies like Freimarkets (the private chains part), two-phase-commit Ripple or Open Transactions are not as trust-less as bitcoin, but they scale much better.
So, yes, this is a tradeoff and we know bitcoin can't scale to be a global near-monopoly money like the USD is today.
Luckily there are monetary innovations out there that will complement bitcoin much better than national currencies do today.

EDIT: I still find the proposal interesting, just wanted to explain that propagation times are not the only obstacle for scalability.

2 different forms of free-money: Freicoin (free of basic interest because it's perishable), Mutual credit (no interest because it's abundant)
midnightmagic
Member
**
Offline Offline

Activity: 88
Merit: 37


View Profile
December 09, 2013, 03:21:34 AM
Last edit: December 09, 2013, 03:55:03 AM by midnightmagic
 #91

Hello avivz..

If a heaviest subtree determines current network head, then two things will be possible with a 50% attacker that aren't currently possible now:

1) You can actually strip off head and the chain length can move backwards. (This includes rewinding back past a difficulty retarget, which is currently impossible.[1])

2) You can create a scenario where two mining factions create divergent forks which simply add to their own subtree weight, and the main chain length goes ~nowhere.

With the way bitcoin is currently built, neither of these two possibilities is.. er.. possible. All 50% attacks must, even though they rewrite history, move head forwards.

Whether this has any meaning in terms of risk to bitcoin users I suppose is another matter than the point of my post. Did you address these possibilities in your paper? I ask because I read a good chunk of it, but not all of it, for which I apologise. I am extremely happy to see academic work being done on bitcoin.

At the very least, I'd like to encourage you (as I am a random internet nobody Smiley to create an experimental fork with this idea built in as a testnet alternative.

Hrm.. I suppose in terms of user software, all current infrastructure which makes assumptions about the current rules would be exposed to risk by a switch to ghost-enabled bitcoind..

[1] maaku pointed out people may be confused by my terminology: a replacement of work can "rewind" a chain in the sense that the history is substituted with another history of greater work in a non-GHOST mainline. "Rewinding" in the sense I mean it here is to strip off the numerically topmost block so that the mainline head is actually prior to the retarget point: using GHOST orphans it is possible to erase the retarget and make it historically as though it never happened.
midnightmagic
Member
**
Offline Offline

Activity: 88
Merit: 37


View Profile
December 09, 2013, 03:37:45 AM
 #92

Also, in the event this paper doesn't become an accepted norm for bitcoin itself, please consider applying the ideas to a p2pool-like mining sharechain. Perhaps there is some way I am currently unable to think of that it could be applied in such a way that DOA and orphans in general just *go away*.

p2pool in some future incarnation is currently the most bitcoin-philosophically-compatible method of mining, and Gavin has stated in the past that some form of it in C++ he would champion getting into the bitcoind mainline in order to enable longtail(my word, not his) mining by random people.
avivz78 (OP)
Newbie
*
Offline Offline

Activity: 21
Merit: 0


View Profile WWW
December 09, 2013, 06:37:49 AM
 #93


No, I bet he won't agree on that.
Let's say we want the network to validate 1 million TPS. Even if nodes communicate with each other using the ansible (zero block propagation delays), you still need every full node to validate 1 million TPS. How many full nodes do you think we would have?

If people stupidly complain about ASIC miners "causing centralization" or "SHA256 being less democratic than scrypt", wait to see what they have to say about needing a room-sized supercomputer (I heard NASDAQ's market execution computer is that big) alongside their ASIC machines for PoW. Well, they will need that computer even for full nodes that don't mine.

Nobody said that propagation delays are the *only* obstacle. We said they are currently the *primary* obstacle, because currently you will run into the trouble at much lower tps.



I guess the difference in our perspectives is that I don't think these are optional tradeoffs. If at some point bitcoin becomes mainstream, there will be an increased pressure to allow more transactions through (or alternatively, fees will soar higher than the money transmitters Bitcoin is trying to replace). You then run into several problems. One is a security problem against attackers. The other is a pull towards centralization. All I'm saying is that we claim to solve the first problem. The second clearly remains.

Bitcoin will never be able to process, on its own, the TPS of the full dollar system (with its whole hierarchy of banks).
We need off-chain transactions, micro-transactions between two parties with transaction replacement, etc.
p2p currencies are a trust-less extreme, but there are other monies that beautifully leverage trust, and we need them to complement each other (I'm talking about LETS and other local currencies, Ripple-like credit networks, simple IOUs, etc).
Technologies like Freimarkets (the private chains part), two-phase-commit Ripple or Open Transactions are not as trust-less as bitcoin, but they scale much better.
So, yes, this is a tradeoff and we know bitcoin can't scale to be a global near-monopoly money like the USD is today.
Luckily there are monetary innovations out there that will complement bitcoin much better than national currencies do today.

EDIT: I still find the proposal interesting, just wanted to explain that propagation times are not the only obstacle for scalability.

I agree with you that it is quite likely that BTC is less scalable than the dollar and other centralized systems, but it is worth it to try and get as close as possible. Plus, you still can't completely rule out the chance that someone will find a way to distribute the workload and not just distribute trust among the many nodes.
avivz78 (OP)
Newbie
*
Offline Offline

Activity: 21
Merit: 0


View Profile WWW
December 09, 2013, 06:45:13 AM
 #94

Hello avivz..

If a heaviest subtree determines current network head, then two things will be possible with a 50% attacker that aren't currently possible now:

1) You can actually strip off head and the chain length can move backwards. (This includes rewinding back past a difficulty retarget, which is currently impossible.[1])

2) You can create a scenario where two mining factions create divergent forks which simply add to their own subtree weight, and the main chain length goes ~nowhere.

With the way bitcoin is currently built, neither of these two possibilities is.. er.. possible. All 50% attacks must, even though they rewrite history, move head forwards.

Whether this has any meaning in terms of risk to bitcoin users I suppose is another matter than the point of my post. Did you address these possibilities in your paper? I ask because I read a good chunk of it, but not all of it, for which I apologise. I am extremely happy to see academic work being done on bitcoin.

At the very least, I'd like to encourage you (as I am a random internet nobody Smiley to create an experimental fork with this idea built in as a testnet alternative.

Hrm.. I suppose in terms of user software, all current infrastructure which makes assumptions about the current rules would be exposed to risk by a switch to ghost-enabled bitcoind..

[1] maaku pointed out people may be confused by my terminology: a replacement of work can "rewind" a chain in the sense that the history is substituted with another history of greater work in a non-GHOST mainline. "Rewinding" in the sense I mean it here is to strip off the numerically topmost block so that the mainline head is actually prior to the retarget point: using GHOST orphans it is possible to erase the retarget and make it historically as though it never happened.


I don't think I understood your first point. I'll be happy if you could elaborate: are you referring to something regarding the re-targeting moment?

As for number 2: it seems that if there are two factions insisting on building split subtrees, then one of these trees eventually grows larger than the other, even if they are evenly matched (see our analysis of "collapse" in the paper for this exact scenario). In this case, one of those factions is not abiding by the protocol if it doesn't switch to the other tree -- that makes it a 50% attacker.
maaku
Legendary
*
expert
Offline Offline

Activity: 905
Merit: 1011


View Profile
December 09, 2013, 06:58:52 AM
 #95

Nobody said that propagation delays are the *only* obstacle. We said they are currently the *primary* obstacle, because currently you will run into the trouble at much lower tps.

That is incorrect. We are already seeing centralization due to the enormous costs of running a full node. The number of full nodes in operation is declining, despite increased awareness of bitcoin. And currently the network propagation time is too small to have any meaningful impact. The trouble spots in scaling bitcoin are elsewhere at this time, I'm afraid.

I'm an independent developer working on bitcoin-core, making my living off community donations.
If you like my work, please consider donating yourself: 13snZ4ZyCzaL7358SmgvHGC9AxskqumNxP
avivz78 (OP)
Newbie
*
Offline Offline

Activity: 21
Merit: 0


View Profile WWW
December 09, 2013, 07:05:16 AM
 #96

Nobody said that propagation delays are the *only* obstacle. We said they are currently the *primary* obstacle, because currently you will run into the trouble at much lower tps.

That is incorrect. We are already seeing centralization due to the enormous costs of running a full node. The number of full nodes in operation is declining, despite increased awareness of bitcoin. And currently the network propagation time is too small to have any meaningful impact. The trouble spots in scaling bitcoin are elsewhere at this time, I'm afraid.


maaku, how is this related to scalability? You could have 0 transactions going through Bitcoin and still the cost of running a node would be high. The reason for that is high competition in mining brought on by the costs of specialized hardware. I don't see the connection.
maaku
Legendary
*
expert
Offline Offline

Activity: 905
Merit: 1011


View Profile
December 09, 2013, 07:27:50 AM
 #97

I'm not talking about mining nodes - I'm talking about regular old transaction-processing instances of bitcoind. These have declined significantly, according to statistics I've seen shared on #bitcoin-dev. I admit to being a contributing statistic here - I used to run an instance of bitcoind behind my home router, but no longer do because my family complains about the terrible impact it has on their browsing experience. At somewhere less than 1Mbps plus other combinations of factors, I'm already removed from the set of people able to run a full node (although personally I care enough to buy a VPS instead).

Bitcoin derives extensive value from its decentralization. If you increase transaction processing to such a point that even the average techie can no longer run a fully validating relay node with resources they have under their immediate control, then you've drastically changed the nature of the system. If you push further such that datacenter-like hardware is required to run a fully validating node... then what remains as the point? You'd have created a supposedly decentralized system that only the elite can afford to audit.

The question is not "how do we run the bitcoin protocol at a higher tps?", it is "how do we scale up the tps while keeping it decentralized?" That is a much harder problem.

I'm an independent developer working on bitcoin-core, making my living off community donations.
If you like my work, please consider donating yourself: 13snZ4ZyCzaL7358SmgvHGC9AxskqumNxP
avivz78 (OP)
Newbie
*
Offline Offline

Activity: 21
Merit: 0


View Profile WWW
December 09, 2013, 09:52:22 AM
 #98

maaku,

That's very interesting. Can you pinpoint the reason for this? Can you elaborate on your setup?

In theory(*), there shouldn't be a very high networking load at the moment.
Blocks are currently under 1MB, and there is less than one transaction per second. This implies sending 1MB once every 10 minutes, plus around 0.5 KB per second for streaming transactions. All of this should be multiplied at most by the number of connections your client has, but in practice most nodes receive flooded messages and don't have to forward them to everyone (Edit: in expectation, every node receives a message once, and so every node should in expectation also send each message out only once). This shouldn't be too prohibitive...

There is additional load due to new nodes that come online and need to catch up and so may download many blocks at the same time. Perhaps this is the source of the trouble?
(Edit: I'm also ignoring inv messages and other small protocol messages above)



(*) as they say: in theory, there is no difference between theory and practice, in practice, there is.
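
A minimal sketch of that back-of-the-envelope estimate (the block size, block interval, transaction rate and the ~500-byte average transaction size are the assumptions stated above):

Code:
#include <cstdio>

int main() {
    const double block_bytes = 1e6;         // ~1 MB block
    const double block_interval_s = 600.0;  // one block every 10 minutes
    const double avg_tx_bytes = 500.0;      // assumed average transaction size
    const double tps = 1.0;                 // roughly one transaction per second

    const double block_stream = block_bytes / block_interval_s;  // ~1.7 KB/s
    const double tx_stream = avg_tx_bytes * tps;                 // ~0.5 KB/s
    std::printf("blocks: %.2f KB/s, transactions: %.2f KB/s, "
                "total: %.2f KB/s per connection (upper bound)\n",
                block_stream / 1000.0, tx_stream / 1000.0,
                (block_stream + tx_stream) / 1000.0);
    return 0;
}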
jimbobway
Legendary
*
Offline Offline

Activity: 1304
Merit: 1014



View Profile
December 09, 2013, 11:26:50 AM
 #99

Is this another solution to the Byzantine generals problem to prevent double spending?  If so, can you rewrite the solution in terms of generals taking over a fort so a layman could understand it?  Thanks in advance.
Voodah
Sr. Member
****
Offline Offline

Activity: 266
Merit: 250



View Profile
December 09, 2013, 01:35:41 PM
 #100

Is this another solution to the Byzantine generals problem to prevent double spending?  If so, can you rewrite the solution in terms of generals taking over a fort so a layman could understand it?  Thanks in advance.

They propose a different way to choose the correct chain at forks, one that counts the work in off-chain blocks as well, which keeps the network secure at much higher block rates and so allows a greater number of transactions per second.
jtimon
Legendary
*
Offline Offline

Activity: 1372
Merit: 1002


View Profile WWW
December 09, 2013, 02:27:11 PM
 #101

Nobody said that propagation delays are the *only* obstacle. We said they are currently the *primary* obstacle, because currently you will run into the trouble at much lower tps.

That's what I understood here:

Surely you must agree that if there were no block propagation delays, there would be no centralization associated with scaling up. Blocks go instantly to everyone, and there are no problems with miners getting more than their share of the blocks.

I agree with you that it is quite likely that BTC is less scalable than the dollar and other centralized systems, but it is worth it to try and get as close as possible. Plus, you still can't completely rule out the chance that someone will find a way to distribute the workload and not just distribute trust among the many nodes.

That's where we disagree, I think. Getting as close as possible may mean sacrificing too much decentralization.
As said, this doesn't mean that the proposal isn't interesting for increasing scalability to a "decentralized-enough" level, whatever that is.

Plus, you still can't completely rule out the chance that someone will find a way to distribute the workload and not just distribute trust among the many nodes.

Distributing the validation workload would be ideal. But I think that such a system (if possible) would be very different from bitcoin in several ways.
You would need some zero-knowledge proof (ZKP) system so that the rest of the nodes could trust a single validator's work. I'm also very interested in that kind of proposal.

2 different forms of free-money: Freicoin (free of basic interest because it's perishable), Mutual credit (no interest because it's abundant)
iddo
Sr. Member
****
Offline Offline

Activity: 360
Merit: 251


View Profile
December 09, 2013, 03:06:30 PM
 #102

jtimon, why zero-knowledge? And relying on a single validator is bad, because that validator could deny txns.
jtimon
Legendary
*
Offline Offline

Activity: 1372
Merit: 1002


View Profile WWW
December 09, 2013, 03:15:01 PM
 #103

jtimon, why zero-knowledge? And relying on a single validator is bad, because that validator could deny txns.

I don't mean a single validator. But if validator A validates tx1 and you don't want all the other validators to also validate tx1, they could use a ZKP provided by validator A.
If validator A doesn't want to validate tx 1, validator B can do it, no problem.

Maybe there's other ways to distribute the validation workload, but that's what initially comes to mind.
Another approach could be to not validate anything, just timestamp everything and leave validation to non-miner nodes using the sequence of transactions, but that looks less SPV-friendly.
I was just thinking out loud, but this is kind of off-topic.

2 different forms of free-money: Freicoin (free of basic interest because it's perishable), Mutual credit (no interest because it's abundant)
TierNolan
Legendary
*
Offline Offline

Activity: 1232
Merit: 1077


View Profile
December 09, 2013, 03:47:09 PM
 #104

Maybe there's other ways to distribute the validation workload, but that's what initially comes to mind.

Is it not the case that checking zero-knowledge proofs is more complex than actually doing the calculations directly? 

For example, would checking a zero knowledge proof that a signature is valid be faster than just checking if the signature was valid?

Anyway, why not just have different nodes validate a sub-set of the transactions. 

Each node picks a random byte and only validates transaction inputs that start with that byte.

This could be combined with a system where a broadcast can be made if a block contains an invalid transaction.

Nodes could still check the entire block, as now, except that only 1/256 of the signatures would be checked.

In addition, nodes would only need to store 1/256 of the transactions in each block.  That reduces disk usage and CPU load.

Checking that minting/tx fees are correct doesn't use much CPU.  The main cost is bandwidth.

Maybe the Bloom filter system could be used rather than a prefix.  A node specifies a Bloom filter and only ever receives those transactions.  A reduced-size block packet could be sent which just has the hashes of the transactions.

Transaction inputs that point to non-existent transactions are a problem though.  There would need to be some DoS protection for messages claiming that transaction inputs are invalid because they don't point to an actual transaction output; you can't prove non-existence.
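
A minimal, hypothetical sketch of the random-byte partitioning idea above (the types and names are illustrative, not Bitcoin's actual data structures): each node picks one byte value and only fully checks inputs whose spent txid starts with that byte.

Code:
#include <array>
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical simplified types -- not the real Bitcoin structures.
struct TxInput {
    std::array<uint8_t, 32> prev_txid{};  // hash of the transaction being spent
};

struct Transaction {
    std::vector<TxInput> inputs;
};

// This node only fully validates inputs whose spent txid starts with the
// byte it picked at random; other bytes are assumed to be covered by peers.
bool my_responsibility(const TxInput& in, uint8_t my_byte) {
    return in.prev_txid[0] == my_byte;
}

int main() {
    const uint8_t my_byte = 0x3a;  // this node's randomly chosen byte

    // Toy block: 10000 single-input transactions with pseudo-random txids.
    std::vector<Transaction> block(10000);
    uint32_t seed = 12345;
    for (auto& tx : block) {
        seed = seed * 1664525u + 1013904223u;  // simple LCG, demo only
        TxInput in;
        in.prev_txid[0] = static_cast<uint8_t>(seed >> 24);
        tx.inputs.push_back(in);
    }

    std::size_t mine = 0, total = 0;
    for (const auto& tx : block)
        for (const auto& in : tx.inputs) {
            ++total;
            if (my_responsibility(in, my_byte))
                ++mine;  // the full signature check would run here
        }
    std::printf("this node checks %zu of %zu inputs (~1/256 of the block)\n",
                mine, total);
    return 0;
}

This only shows the partitioning predicate; the invalid-block broadcast and the DoS issue raised above are the hard parts and are not addressed here.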

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
iddo
Sr. Member
****
Offline Offline

Activity: 360
Merit: 251


View Profile
December 09, 2013, 04:40:16 PM
 #105

Guys, what you're talking about doesn't benefit from zero-knowledge, please google "succinct non-interactive argument of knowledge" or "probabilistically checkable proof", i.e. you'd like to have SNARK and not necessarily zk-SNARK here.
jtimon
Legendary
*
Offline Offline

Activity: 1372
Merit: 1002


View Profile WWW
December 09, 2013, 07:26:46 PM
 #106

Guys, what you're talking about doesn't benefit from zero-knowledge, please google "succinct non-interactive argument of knowledge" or "probabilistically checkable proof", i.e. you'd like to have SNARK and not necessarily zk-SNARK here.

Sorry, iddo, but I'm just starting to learn about ZKP/SNARK/SCIP.
I thought SNARK was just a technique to generalize ZKPs; I didn't know about SNARK vs zk-SNARK.
What I mean is that you need this kind of cryptography if you want all nodes to trust a node's validation without repeating it themselves.

Is it not the case that checking zero-knowledge proofs is more complex than actually doing the calculations directly?  

For example, would checking a zero knowledge proof that a signature is valid be faster than just checking if the signature was valid?

If that were the case for both ZKPs and SNARKs, then they would be useless for distributing the validation workload.
But I think it's not the case. Maybe someone who knows more about this can answer you better.

Anyway, why not just have different nodes validate a sub-set of the transactions.  
...

Yes, that's what "distributing the validation workload" is about.
I'm not sure I understand your proposal or why some nodes can trust others, but this is definitely getting off-topic.

2 different forms of free-money: Freicoin (free of basic interest because it's perishable), Mutual credit (no interest because it's abundant)
maaku
Legendary
*
expert
Offline Offline

Activity: 905
Merit: 1011


View Profile
December 09, 2013, 07:44:06 PM
 #107

I think the point of being zero knowledge is that you don't have to download the blockchain, no?

I'm an independent developer working on bitcoin-core, making my living off community donations.
If you like my work, please consider donating yourself: 13snZ4ZyCzaL7358SmgvHGC9AxskqumNxP
iddo
Sr. Member
****
Offline Offline

Activity: 360
Merit: 251


View Profile
December 09, 2013, 07:49:08 PM
 #108

I think the point of being zero knowledge is that you don't have to download the blockchain, no?

Maybe you're confusing zero-trust with zero-knowledge?
maaku
Legendary
*
expert
Offline Offline

Activity: 905
Merit: 1011


View Profile
December 09, 2013, 09:39:20 PM
 #109

Maybe. The proof needs to be verifiable without any blockchain data. Regular SNARKS can do this?

I'm an independent developer working on bitcoin-core, making my living off community donations.
If you like my work, please consider donating yourself: 13snZ4ZyCzaL7358SmgvHGC9AxskqumNxP
MatthewLM
Legendary
*
Offline Offline

Activity: 1190
Merit: 1004


View Profile
December 09, 2013, 10:59:44 PM
 #110

I was thinking about this myself. Could you use a zero-knowledge proof to validate the block-chain without needing the actual block-chain? I don't know very much about it at all.
MatthewLM
Legendary
*
Offline Offline

Activity: 1190
Merit: 1004


View Profile
December 09, 2013, 11:22:38 PM
 #111

I read part of the paper. In layman's terms, the "GHOST" parent selection takes into account the work of all the blocks that descend from a block which is a candidate to be made parent. It therefore takes the full hash distribution of the network into account. In the current system, if the time between blocks were reduced significantly, blocks that propagate slowly would be placed on a side-chain because they lose the propagation race; unlike the "GHOST" system, the current system therefore cannot take the full hash distribution into account.

Now if only that had been explained in the abstract, it would have saved me so much time!
iddo
Sr. Member
****
Offline Offline

Activity: 360
Merit: 251


View Profile
December 10, 2013, 12:44:16 AM
 #112

Maybe. The proof needs to be verifiable without any blockchain data. Regular SNARKS can do this?

I was thinking about this myself. Could you use a zero-knowledge proof to validate the block-chain without needing the actual block-chain? I don't know very much about it at all.

The answer is yes, but why do you guys keep resurrecting zero-knowledge back into this discussion?

Here's a talk from the 2013 Bitcoin conference by my PhD supervisor about it: http://www.youtube.com/watch?v=YRcPReUpkcU

You can think of it as some specified C or assembler source code having a similar role to that of an ECDSA pubkey, and a succinct form of an execution of this source code as having a similar role to that of an ECDSA signature. Given this "pubkey", everyone can verify that the "signature" indeed corresponds to a valid execution of that C/assembler program and that this execution terminated with the required output. The probability that an invalid "signature" will pass this verification algorithm is negligible.

So in order to "compress" the blockchain, we specify the C/assembler code that checks the blocks from genesis until a certain checkpoint block, and compute a "signature" for an execution of this computation, so that nodes can verify with zero trust that this checkpoint is valid without downloading all the blocks since genesis, and without the need to fetch any blockchain data (to answer maaku's question). Saying that this "signature"/proof is zero-knowledge just means that it doesn't reveal any additional information besides the single bit that says whether in this execution all the blockchain blocks indeed passed validation (an example of additional info is, say, that this "signature"/proof leaks that Alice sent Bob 5 coins at some block). Why do you guys care about zero-knowledge in this regard?
jtimon
Legendary
*
Offline Offline

Activity: 1372
Merit: 1002


View Profile WWW
December 10, 2013, 11:53:57 AM
 #113

Why do you guys care about zero-knowledge in this regard?

I think we just tend to confuse snark/scip with zero knowledge.

2 different forms of free-money: Freicoin (free of basic interest because it's perishable), Mutual credit (no interest because it's abundant)
ShadowOfHarbringer
Legendary
*
Offline Offline

Activity: 1470
Merit: 1005


Bringing Legendary Har® to you since 1952


View Profile
December 10, 2013, 02:26:19 PM
 #114

* ShadowOfHarbringer is watching this.

MatthewLM
Legendary
*
Offline Offline

Activity: 1190
Merit: 1004


View Profile
December 10, 2013, 03:59:43 PM
 #115

Why do you guys care about zero-knowledge in this regard?

Because nodes wouldn't need to store and validate the block-chain. It only needs to be validated once. The output of the validation could be the unspent outputs, so that all miners need is the last block alongside the unspent outputs. Then they can build upon that. Non-miners would simply verify the proofs in the blocks and take the unspent outputs relevant to them. Why is this not a good idea?

How would anyone actually start developing a crypto-currency that worked in such a way? It all seems very theoretical to me.
iddo
Sr. Member
****
Offline Offline

Activity: 360
Merit: 251


View Profile
December 10, 2013, 08:41:58 PM
 #116

The output of the validation could be the unspent outputs

A long output will blow up the complexity; the output can be boolean, with the hash of the UTXO set as one of the inputs.

Why is this not a good idea?

It's a great idea, but what does it have to do with zero-knowledge?

How would anyone actually start developing a crypto-currency that worked in such a way? It all seems very theoretical to me.

When an efficient SNARK implementation is available, there's a good chance that the Bitcoin devs will integrate this option.
MatthewLM
Legendary
*
Offline Offline

Activity: 1190
Merit: 1004


View Profile
December 10, 2013, 09:21:57 PM
 #117

It's a great idea, but what does it have to do with zero-knowledge?

I suppose it's partially zero-knowledge as you wouldn't need all of the inputs to the program. I did read somewhere that the SNARK programs can have public inputs as well as arbitrary inputs which are unknown to verifiers. Is that right? Some details would be hidden to the verifier, such as the transactions.
jtimon
Legendary
*
Offline Offline

Activity: 1372
Merit: 1002


View Profile WWW
December 11, 2013, 12:53:17 AM
 #118

The output of the validation could be the unspent outputs

A long output will blow up the complexity; the output can be boolean, with the hash of the UTXO set as one of the inputs.

I thought it would be something like

inputs: prev_utxo_tree_hash, transaction set for current block
output: utxo_tree_hash

or

F(prev_utxo_tree_hash, transaction set for current block) = utxo_tree_hash

I did read somewhere that the SNARK programs can have public inputs as well as arbitrary inputs which are unknown to verifiers. Is that right? Some details would be hidden to the verifier, such as the transactions.

If this is correct then another node would just need prev_utxo_tree_hash, utxo_tree_hash and the signature of the execution of F to validate that the block was correctly processed. Doesn't even need the transactions.

Sorry again if I'm saying something stupid about snark, this is still like magic to me so it's hard to retain what can and cannot be done.
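
For what it's worth, a toy sketch of the relation being discussed; everything here (the string Outpoints, std::hash standing in for a real UTXO-tree commitment) is purely illustrative. A prover would run apply_block and produce a succinct proof that check_transition holds, and a verifier would check that proof instead of re-executing the code or seeing the transactions.

Code:
#include <cstddef>
#include <cstdio>
#include <functional>
#include <set>
#include <string>
#include <vector>

// Toy model of the statement a proof system would attest to.
using Outpoint = std::string;  // "txid:index" in this toy model

struct Tx {
    std::vector<Outpoint> spends;   // outputs consumed
    std::vector<Outpoint> creates;  // outputs created
};

// Stand-in for a real Merkle/UTXO-tree commitment.
std::size_t toy_commitment(const std::set<Outpoint>& utxo) {
    std::size_t h = 0;
    for (const auto& o : utxo)
        h ^= std::hash<Outpoint>{}(o) + 0x9e3779b9u + (h << 6) + (h >> 2);
    return h;
}

// F(prev_state, txs) -> new commitment; fails if a tx spends a missing output.
bool apply_block(std::set<Outpoint> utxo, const std::vector<Tx>& txs,
                 std::size_t& out_commit) {
    for (const auto& tx : txs) {
        for (const auto& o : tx.spends)
            if (utxo.erase(o) == 0) return false;  // unknown output / double spend
        for (const auto& o : tx.creates) utxo.insert(o);
    }
    out_commit = toy_commitment(utxo);
    return true;
}

// Boolean form of the relation: "applying these txs to the previous state
// yields the claimed commitment".
bool check_transition(const std::set<Outpoint>& prev_utxo,
                      const std::vector<Tx>& txs, std::size_t claimed) {
    std::size_t c = 0;
    return apply_block(prev_utxo, txs, c) && c == claimed;
}

int main() {
    std::set<Outpoint> utxo = {"a:0", "b:1"};
    std::vector<Tx> block = {{{"a:0"}, {"c:0", "c:1"}}};
    std::size_t commit = 0;
    apply_block(utxo, block, commit);
    std::printf("transition valid: %s\n",
                check_transition(utxo, block, commit) ? "yes" : "no");
    return 0;
}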

2 different forms of free-money: Freicoin (free of basic interest because it's perishable), Mutual credit (no interest because it's abundant)
Cryddit
Legendary
*
Offline Offline

Activity: 924
Merit: 1122


View Profile
December 11, 2013, 03:34:27 AM
 #119

As I understand it, you generate a verifiable 'trace signature' of a program run which bears a distinctive cryptographic relationship with the code itself; it can be considered to be 'signed' by the code used to create it.

But GIGO still applies.  Given the same starting point, the same code can be run with different inputs; you can then give the resulting 'trace signatures' to two different people to verify, both will confirm that the code ran correctly, and yet the two runs will have produced different results.

However, for some data structures, if you are given the starting point, the 'trace signature' AND the results, you can verify that the starting point was transformed into the results via the 'signed' run of the code, even if that valid run had unknown inputs.  This is possible with some structures and not others, but because of the cryptographic chain-of-evidence qualities, the Bitcoin Blockchain is one of the finest examples of a structure that it *DOES* work with.

But I think this is going rather far afield from the original intent of this thread, which was to make Bitcoin able to handle more transactions per second.
halvoraarud
Newbie
*
Offline Offline

Activity: 11
Merit: 0


View Profile
December 12, 2013, 04:42:43 PM
 #120

You guys have made the biggest contribution to the biggest problem with cryptocurrencies: transaction times.
Now we just need to implement it, plus Zerocoin support. Ahh, how I love open source.

I would also like to see some kind of community interest rate in the currency, similar to what was outlined here: https://docs.google.com/document/d/1302Fb5BpgTRS2P-snN4ys5JR3l8QU2QhpncldAYaXfM/edit?usp=sharing

No currency can have limited supply and stable prices.

Besides that, I think we are on to a winning formula, yes.

HA
maaku
Legendary
*
expert
Offline Offline

Activity: 905
Merit: 1011


View Profile
December 12, 2013, 06:23:13 PM
 #121

There is no stable, secure way to manage interest rates that cannot be trivially gamed by participants, including the proposal you linked to. You're not the first to consider this problem. If you're of the opinion that some inflation/interest is desirable, you might be interested in Freicoin. But this is wandering far afield from the topic.

I'm an independent developer working on bitcoin-core, making my living off community donations.
If you like my work, please consider donating yourself: 13snZ4ZyCzaL7358SmgvHGC9AxskqumNxP
eggdescrambler
Newbie
*
Offline Offline

Activity: 43
Merit: 0


View Profile WWW
December 13, 2013, 02:49:58 PM
 #122




Have you seen this paper?

 http://eprint.iacr.org/2012/248.pdf

It sounds like, from what is depicted in section 5.3, that once this double-spending alert is implemented, just waiting 15 seconds after receiving a payment without seeing any conflict would be sufficient to be confident, wouldn't it?
maaku
Legendary
*
expert
Offline Offline

Activity: 905
Merit: 1011


View Profile
December 13, 2013, 07:34:25 PM
 #123

No. That paper makes the flawed assumption that double-spends must be propagated on the network. On the contrary, someone executing a double spend could own or have some relationship with one or more large mining pools and send the double-spend to them directly. No one would know about the double spend until it is already confirmed in the next block.

I'm an independent developer working on bitcoin-core, making my living off community donations.
If you like my work, please consider donating yourself: 13snZ4ZyCzaL7358SmgvHGC9AxskqumNxP
eggdescrambler
Newbie
*
Offline Offline

Activity: 43
Merit: 0


View Profile WWW
December 13, 2013, 09:53:48 PM
 #124

Ok, I see what you mean, only those large pools would know about the double spend.
So such a large pool would need to be in bed with the double spender.


No. That paper makes the flawed assumption that double-spends must be propagated on the network. On the contrary, someone executing a double spend could own or have some relationship with one or more large mining pools and send the double-spend to them directly. No one would know about the double spend until it is already confirmed in the next block.
eggdescrambler
Newbie
*
Offline Offline

Activity: 43
Merit: 0


View Profile WWW
December 13, 2013, 10:55:58 PM
 #125


How important is that threat? I mean, how large is the largest pool and what share of the hashing power does it have? I'm trying to get a rough sense of the probability of this threat.

Ok, I see what you mean, only those large pools would know about the double spend.
So such a large pool would need to be in bed with the double spender.


No. That paper makes the flawed assumption that double-spends must be propagated on the network. On the contrary, someone executing a double spend could own or have some relationship with one or more large mining pools and send the double-spend to them directly. No one would know about the double spend until it is already confirmed in the next block.
maaku
Legendary
*
expert
Offline Offline

Activity: 905
Merit: 1011


View Profile
December 14, 2013, 02:08:47 AM
 #126

The largest pools are about 30% of the network. That would give them a 1-in-3 chance of pulling it off, which is more than enough for some scenarios.

EDIT: The paper also assumes that once one transaction has been seen, a node will always reject a double-spending transaction. This is also false. Nothing prevents a self-serving operator from discarding the initial transaction and accepting the double-spend (because it has a higher fee, for example).

I'm an independent developer working on bitcoin-core, making my living off community donations.
If you like my work, please consider donating yourself: 13snZ4ZyCzaL7358SmgvHGC9AxskqumNxP
eggdescrambler
Newbie
*
Offline Offline

Activity: 43
Merit: 0


View Profile WWW
December 15, 2013, 10:28:56 PM
 #127


Thanks

Still, having this double-spend alert would reduce the current threat, making the attack much more difficult to achieve.
What's the hold-up in adding this double-spend alert?

Other than the paper that started this thread, what are the other possible approaches being investigated to improve bitcoin's transaction speed?

Thanks very much for your reply.


The largest pools are about 30% of the network. That would give them a 1-in-3 chance of pulling it off, which is more than enough for some scenarios.

EDIT: The paper also assumes that once one transaction has been seen, a node will always reject a double-spending transaction. This is also false. Nothing prevents a self-serving operator from discarding the initial transaction and accepting the double-spend (because it has a higher fee, for example).
billotronic
Legendary
*
Offline Offline

Activity: 1610
Merit: 1000


Crackpot Idealist


View Profile
December 15, 2013, 11:38:57 PM
 #128

Yes. There are things hiding in the dark... we shills were actually the team of 'second gunmen' on the Grassy Knoll.

There, it's open. Ha.

This post sums up why all this bullshit is a scam
Read It. Hate It. Change the facts that it represents.
https://bitcointalk.org/index.php?topic=1606638.msg16139644#msg16139644
eggdescrambler
Newbie
*
Offline Offline

Activity: 43
Merit: 0


View Profile WWW
December 18, 2013, 12:40:35 AM
 #129

Yes, very sad.

I would love to find out what's holding bitcoin coders up from integrating this double-spending alert, which they talk about in section 8.5:
http://eprint.iacr.org/2012/248.pdf
Not that it would resolve the issue of a compromised mining pool, but it would still significantly reduce the problem.


We need a moderator to step in and remove all this off-topic discussion.

Or at least split the topic.

gmaxwell, do it please.
iddo
Sr. Member
****
Offline Offline

Activity: 360
Merit: 251


View Profile
December 18, 2013, 09:40:32 AM
Last edit: December 18, 2013, 02:09:52 PM by iddo
 #130

F(prev_utxo_tree_hash, transaction set for current block) = utxo_tree_hash

I meant that it'd probably have less complexity to do: F(prev_utxo_tree_hash, transaction set for current block, utxo_tree_hash) == 1

It's a great idea, but what does it got to do with zero-knowledge?

I suppose it's partially zero-knowledge as you wouldn't need all of the inputs to the program. I did read somewhere that the SNARK programs can have public inputs as well as arbitrary inputs which are unknown to verifiers. Is that right? Some details would be hidden to the verifier, such as the transactions.

Your attitude is too cavalier... Just because you weren't supposed to know the non-public inputs doesn't mean that those inputs don't leak when you examine the SNARK proof; you need a zero-knowledge proof to guarantee that nothing leaks besides the output.
But for the last time, why do you care whether the proof is zero-knowledge or not in this context?
Etlase2
Hero Member
*****
Offline Offline

Activity: 798
Merit: 1000


View Profile
December 18, 2013, 02:41:12 PM
 #131

Your attitude is too cavalier... Just because you weren't supposed to know the non-public inputs doesn't mean that those inputs don't leak when you examine the SNARK proof; you need a zero-knowledge proof to guarantee that nothing leaks besides the output.
But for the last time, why do you care whether the proof is zero-knowledge or not in this context?

I think the distinction of must-not-have and not-necessarily-need of ZKP vs. SNARK is being lost here. Probably not necessary to keep asking this question.

eduffield
Legendary
*
Offline Offline

Activity: 1176
Merit: 1036


Dash Developer


View Profile WWW
December 18, 2013, 03:52:19 PM
 #132

Is there an implementation of this that someone is working on? I'd like to help implement it

Dash - Digital Cash | dash.org | dashfoundation.io | dashgo.io
avivz78 (OP)
Newbie
*
Offline Offline

Activity: 21
Merit: 0


View Profile WWW
December 18, 2013, 04:07:28 PM
 #133

Is there an implementation of this that someone is working on? I'd like to help implement it

We'd be happy to cooperate with anyone who wants to have a go at an implementation. While we have poked around in the bitcoin source code a bit, I'm afraid we're not familiar enough with the code to pull this off easily ourselves.

I'm also very interested in establishing a testnet that will be stress tested with many transactions per second. If you have an idea on how to do this, I'd be happy to know.

Feel free to email me directly regarding any of these projects.
LurbQBurdock
Newbie
*
Offline Offline

Activity: 24
Merit: 0



View Profile
December 18, 2013, 05:51:54 PM
Last edit: December 18, 2013, 06:47:29 PM by LurbQBurdock
 #134

I've been reading through the bitcoin source code.  I'm interested in working with anyone who tries to implement this.  Here's what I've found:

All the relevant code seems to be in main.cpp and main.h.

As currently implemented, we can traverse the blocktree from leaves->root, but Aviv and Yonatan's algorithm requires us to be able to traverse the blocktree from root->leaves.  The first thing that needs to be done is that we need to rewrite the tree to make it traversable in both directions.

After this, the code to determine which block is the "BestBlock" to append a new child block to needs to be pretty much completely rewritten.  The code currently computes the ChainWork (the amount of work needed to compute the current block and all blocks above the current block on the tree) of each block in the tree.  It then searches through the entirety of (the still valid parts of) the tree for the block that has the largest ChainWork, and it assigns this block as the BestBlock.  Instead, we need to traverse the tree from root->leaves, and compare the work in each subtree whenever we come to a fork.

Also, the code to invalidate and discard blocks on the tree needs to be rewritten.  I'm lost on how this works and how it should be rewritten.
TierNolan
Legendary
*
Offline Offline

Activity: 1232
Merit: 1077


View Profile
December 18, 2013, 06:04:40 PM
 #135

The way the client calculates the proof of work for each block is that it does the calculation only once.

Effectively, each block has a number which means "Proof of work for the chain, if this block was the leaf".

It also has a PoW for best chain variable.  If you add a block and it is higher than the best chain, then it has to set that chain as the best.

Under your system, when a new block is added, it would be more complex.

Code:
A <- B <- C  <- D
     \ <- C' <- D'
          \  <- D''

When comparing D' to D, D' would get extra weight from D''.  However, when comparing D' to D'', it wouldn't get the extra weight.  There isn't one "work" number for each node.

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
avivz78 (OP)
Newbie
*
Offline Offline

Activity: 21
Merit: 0


View Profile WWW
December 18, 2013, 09:22:58 PM
 #136

I've been reading through the bitcoin source code.  I'm interested in working with anyone who tries to implement this.  Here's what I've found:

All the relevant code seems to be in main.cpp and main.h.

As currently implemented, we can traverse the blocktree from leaves->root, but Aviv and Yonatan's algorithm requires us to be able to traverse the blocktree from root->leaves.  The first thing that needs to be done is that we need to rewrite the tree to make it traversable in both directions.

After this, the code to determine which block is the "BestBlock" to append a new child block to needs to be pretty much completely rewritten.  The code currently computes the ChainWork (the amount of work needed to compute the current block and all blocks above the current block on the tree) of each block in the tree.  It then searches through the entirety of (the still valid parts of) the tree for the block that has the largest ChainWork, and it assigns this block as the BestBlock.  Instead, we need to traverse the tree from root->leaves, and compare the work in each subtree whenever we come to a fork.

Hi, thanks for taking a look! I think it's a good idea to map out all the changes that need to take place.

Comments:

1) I might be wrong about this, because I haven't looked at the implementation too carefully, but in principle, I think forward and backward traversal of the chain should already be possible: as far as I'm aware, the unspent transaction output set needs to be rolled back if a recent part of the chain is replaced. If I'm not mistaken (and somebody please correct me if I am) the unspent outputs are only maintained for the "best block" and if this is replaced by an alternate chain, it isn't built from scratch.

2) There is also a need to add hashes (or other identifiers) of newly discovered off-chain blocks into each block ("newly discovered" here means: not included inside any other previously reachable block). Nodes must then request such blocks if they do not already have them, and the propagation mechanisms / inv messages need to be adjusted accordingly.

Also, the code to invalidate and discard blocks on the tree needs to be rewritten.  I'm lost on how this works and how it should be rewritten.

3) I'm not sure what you mean by "the code to invalidate and discard blocks". Can you explain in more detail?

avivz78 (OP)
Newbie
*
Offline Offline

Activity: 21
Merit: 0


View Profile WWW
December 18, 2013, 09:44:19 PM
 #137

The way the client calculates the proof of work for each block is that it does the calculation only once.

Effectively, each block has a number which means "Proof of work for the chain, if this block was the leaf".

It also has a PoW for best chain variable.  If you add a block and it is higher than the best chain, then it has to set that chain as the best.

Under your system, when a new block is added, it would be more complex.

Code:
A <- B <- C  <- D
     \ <- C' <- D'
          \  <- D''

When comparing D' to D, D' would get extra weight from D''.  However, when comparing D' to D'', it wouldn't get the extra weight.  There isn't one "work" number for each node.

You are right, it may not be as simple as just keeping the one number of "proof for the chain as if block was a leaf" but in principle the following approach works (and its refinement below is probably much more efficient):

Approach 1 (very naive):
1) For each block, save the total amount of work in its subtree. Let us denote this saved value by W(X)=total work of subtree rooted at block X.

In your example above: the subtree of block C includes blocks C,D. The subtree of block C' includes C',D',D''.
2) When a new block arrives, simply increment W(X) for all blocks X along the path to the genesis by the amount of work represented by this new block.
3) If the new block was off-chain, find its connection to the main chain and check if a change to the chain is needed.

(the 2nd step is computationally expensive so the following refinement can be done)

Approach 2 (Lazy evaluation):
The idea here is to still maintain values for total work in the subtree rooted at each block, but to do the updates only when required.
For each block on the main chain we maintain a lower bound on the work in its subtree; for each block off-chain we maintain an exact number. The idea is that off-chain branches die out after a relatively short while, and on-chain blocks that are sufficiently deep in the tree do not need to be examined often.

Invariants: W(X) for any block X that is off-chain is exact; W(X) for any block X that is on the main chain is a lower bound on the real value.

The updates:

When a new block is added:
1) If it extends the current chain, do not update W(X) for any block.
2) Otherwise, if it is off-chain then traverse its path (towards the genesis) until it meets the main chain, and increment the work W(X) of each subtree along the way accordingly. Also traverse the main chain up to this point and update W(X) for each node along the way.
3) check to see if a change of chain is needed. If it is, the off-chain values are exact and can be used to find the new best leaf by following the greedy path towards the leaves.

I hope I explained this well enough for you to at least get the basic idea... Please let me know if I didn't, or if you spot a mistake.
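
A standalone sketch of Approach 1 (deliberately simplified: unit work per block, no validation, no lazy evaluation, and not Bitcoin's CBlockIndex), reusing TierNolan's example tree from above:

Code:
#include <cstdio>
#include <vector>

// Simplified in-memory block tree.  Each node stores W(X), the total work
// of the subtree rooted at it.
struct BlockNode {
    const char* name = "";
    BlockNode* parent = nullptr;
    std::vector<BlockNode*> children;
    double own_work = 1.0;      // work represented by this block alone
    double subtree_work = 0.0;  // W(X): own work plus all descendants' work
};

// Step 2 of Approach 1: on arrival, add the new block's work to W(X) of
// every block on its path back to the genesis.
void add_block(BlockNode* parent, BlockNode* child) {
    child->parent = parent;
    child->subtree_work = child->own_work;
    parent->children.push_back(child);
    for (BlockNode* b = parent; b != nullptr; b = b->parent)
        b->subtree_work += child->own_work;
}

// GHOST selection: starting at the genesis, repeatedly descend into the
// child whose subtree carries the most work, until reaching a leaf.
BlockNode* ghost_best_leaf(BlockNode* genesis) {
    BlockNode* cur = genesis;
    while (!cur->children.empty()) {
        BlockNode* best = cur->children.front();
        for (BlockNode* c : cur->children)
            if (c->subtree_work > best->subtree_work) best = c;
        cur = best;
    }
    return cur;
}

int main() {
    // TierNolan's example: A <- B <- C <- D, and B <- C' <- {D', D''}.
    BlockNode A, B, C, D, C2, D2, D3;
    A.name = "A"; B.name = "B"; C.name = "C"; D.name = "D";
    C2.name = "C'"; D2.name = "D'"; D3.name = "D''";
    A.subtree_work = A.own_work;  // genesis
    add_block(&A, &B);
    add_block(&B, &C);   add_block(&C, &D);
    add_block(&B, &C2);  add_block(&C2, &D2);  add_block(&C2, &D3);

    // W(C) = 2 but W(C') = 3, so GHOST follows the C' branch even though
    // both chains are the same length.
    std::printf("best leaf: %s\n", ghost_best_leaf(&A)->name);
    return 0;
}

Approach 2 would replace the unconditional walk back to the genesis in add_block with the lazy bookkeeping described above.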
LurbQBurdock
Newbie
*
Offline Offline

Activity: 24
Merit: 0



View Profile
December 18, 2013, 10:43:06 PM
 #138

1) I might be wrong about this, because I haven't looked at the implementation too carefully, but in principle, I think forward and backward traversal of the chain should already be possible: as far as I'm aware, the unspent transaction output set needs to be rolled back if a recent part of the chain is replaced. If I'm not mistaken (and somebody please correct me if I am) the unspent outputs are only maintained for the "best block" and if this is replaced by an alternate chain, it isn't built from scratch.
class CBlockIndex has pointers pprev and pnext that point to the parent block and the child along the main chain.  The main chain can be traversed forward and backward, but the entirety of the tree cannot.  I see no reason why this can't be changed.

2) There is also a need to add hashes (or other identifiers) of newly discovered off-chain blocks into each block ("newly discovered" here means: not included inside any other previously reachable block). Nodes must then request such blocks if they do not already have them, and the propagation mechanisms / inv messages need to be adjusted accordingly.
Yes, this would also need to be done in order to implement your proof-of-work difficulty-adjustment algorithm, right?  But it's not required for GHOST?  I have not started looking into how to implement this.  I am worried about the bloat that this would cause if the entire tree must be stored though.  Have you looked into how your algorithm would fare if we ignored off-chain blocks for difficulty-adjustment?

3) I'm not sure what you mean by "the code to invalidate and discard blocks". Can you explain in more detail?

I was originally under the impression that the parts of the blocktree that are not on the main chain get deleted from memory, but now I see that what I was looking at was code to invalidate blocks if the header is not correct.  I am not sure if the entire tree is kept in memory or if parts are eventually discarded.

By the way, I'm waiting to hear back from my friends before I email you back.

avivz78 (OP)
Newbie
*
Offline Offline

Activity: 21
Merit: 0


View Profile WWW
December 18, 2013, 11:52:34 PM
 #139

2) There is also a need to add hashes (or other identifiers) of newly discovered off-chain blocks into each block ("newly discovered" here means: not included inside any other previously reachable block). Nodes must then request such blocks if they do not already have them, and the propagation mechanisms / inv messages need to be adjusted accordingly.
Yes, this would also need to be done in order to implement your proof-of-work difficulty-adjustment algorithm, right?  But it's not required for GHOST?  I have not started looking into how to implement this.  I am worried about the bloat that this would cause if the entire tree must be stored though.  Have you looked into how your algorithm would fare if we ignored off-chain blocks for difficulty-adjustment?

This is actually required for GHOST, so that nodes have the same world-view and decide on the same leaf -- the off-chain blocks affect your choice of leaf. It is especially important for nodes that were offline for a while and want to catch up. You may in theory be able to do without it, but the convergence time to the same chain would take longer.

Regarding bloat: this is not a difficult issue. First, there is only potential bloat once block creation is fast relative to propagation delays. More importantly, the contents of off-chain blocks can be discarded after a short while (e.g., after 1 day). Only the header is needed to verify that the difficulty was met. If a switch is needed to some previously discarded subtree, then the node that triggers it can supply the contents of the blocks as well.
LurbQBurdock
Newbie
*
Offline Offline

Activity: 24
Merit: 0



View Profile
December 19, 2013, 12:54:47 AM
Last edit: December 19, 2013, 03:14:09 AM by LurbQBurdock
 #140

Regarding bloat: this is not a difficult issue. First, there is only potential bloat once block creation is fast relative to propagation delays. More importantly, the contents of off-chain blocks can be discarded after a short while (e.g., after 1 day). Only the header is needed to verify that the difficulty was met. If a switch is needed to some previously discarded subtree, then the node that triggers it can supply the contents of the blocks as well.
 OK.  This works for me.
eduffield
Legendary
*
Offline Offline

Activity: 1176
Merit: 1036


Dash Developer


View Profile WWW
December 19, 2013, 01:57:23 PM
 #141

To anyone interested in helping implement ghost:

I'm going to start implementing the changes needed to traverse the tree in the reverse direction and then implement GHOST. If anyone wants to help, feel free to submit pull requests:

https://github.com/evan82/bitcoin-ghost

Dash - Digital Cash | dash.org | dashfoundation.io | dashgo.io
LurbQBurdock
Newbie
*
Offline Offline

Activity: 24
Merit: 0



View Profile
December 19, 2013, 04:47:28 PM
 #142

Aviv & Yonatan:  Could GHOST work in the situation that transaction & block propagation across the whole network happens much slower than block generation?  For instance, in inter-planetary distances?  We'd expect blocks to be made on Mars every few seconds but each would not make it to Earth for a few minutes.
halvoraarud
Newbie
*
Offline Offline

Activity: 11
Merit: 0


View Profile
December 20, 2013, 12:04:07 AM
 #143

Aviv & Yonatan:  Could GHOST work in the situation that transaction & block propagation across the whole network happens much slower than block generation?  For instance, in inter-planetary distances?  We'd expect blocks to be made on Mars every few seconds but each would not make it to Earth for a few minutes.

The only mining on Mars should be minerals Smiley
avivz78 (OP)
Newbie
*
Offline Offline

Activity: 21
Merit: 0


View Profile WWW
December 21, 2013, 07:49:15 PM
 #144

Aviv & Yonatan:  Could GHOST work in the situation that transaction & block propagation across the whole network happens much slower than block generation?  For instance, in inter-planetary distances?  We'd expect blocks to be made on Mars every few seconds but each would not make it to Earth for a few minutes.

It would work, in the sense that the 50% attack remains hard to carry out, but the number of transactions per second would be really low. Too low to be of any use.
We're working on an extension that would fix that issue as well. It's a bit too early to tell if we'll be able to close all the holes. I haven't thought of it as a solution for an interplanetary currency though  Smiley
King Lear
Newbie
*
Offline Offline

Activity: 23
Merit: 0


View Profile
January 05, 2014, 03:17:29 PM
 #145

Hi,

Let's reexamine proposition 8.1 and claim 8.2 of the paper, that say there will always eventually be a convergence ("collapse") to the next chain block and this convergence takes in expectation a finite time.

The proof is based on the fact that for any finite time interval, there is a positive constant probability that no new block is created anywhere in the network during a period of that length. The problem is that you implicitly assume all miners immediately release any block they have found, and we know this is untrue. So you cannot simply derive that at some point all the nodes in the network will have exactly the same information without assuming that all nodes are honest!

If the average time to create a new block in the tree is longer than the network diameter, your proof is nevertheless valid. Otherwise, theoretically there is a positive probability that the network will never reach a point of sharing exactly the same information, since withheld blocks might be released once every d seconds, where d is the network's diameter.
 
Now let me describe a possible DoS attack that tries to prevent convergence. We take the approach of the paper and consider a simplified network of only two honest nodes, while noting that a similar attack is applicable to a more complex network. Say we have two honest nodes whose connection has a delay d that is relatively large compared with the mean block creation time.

Say we have two more nodes controlled by the same attacker, each adjacent to one of the two honest nodes, so that the delay within each pair is negligible compared to d. The two attacker's nodes may be connected by a special secret channel whose delay is lower than d, or may have no direct connection at all. Having such a special connection can only reduce the attacker's required fraction of total hash-power, but it is not crucial to the attack's success. (Nevertheless I think it is most probable that such a faster channel exists, because in reality the attacker can skip validation of directly transmitted data between his nodes, or even send only data about the number of created blocks instead of the blocks themselves. On the other hand, those two honest nodes are actually two "centers" of the network, and the delay between them is caused by having a path of at least several nodes between them…).

Let the two honest nodes have about the same hash-power, and let the two attacker's nodes have relatively small hash-power compared with the honest nodes. Due to the delay, each honest node is informed only of his own blocks and the released blocks of his adjacent attacker node, plus those blocks of the other honest node and released blocks of the other attacker's node that were released more than d seconds ago. Therefore it is possible that the other honest node is working on a branch that is actually heavier than the branch the first honest node is working on, because the first honest node wrongly thinks that his branch is the heavier one. The attacker tries to always fool the honest node that mines on the less heavy branch, so that the honest nodes never converge to the same branch.

In order to analyze the hash-power the attacker needs so that with positive probability a "collapse" never occurs (in contradiction to proposition 8.1), we slightly modify the model to make it easier to analyze mathematically: instead of saying an honest node continues to mine on his initial branch until the point where the competing branch has already been heavier for d seconds, we say that he continues to mine on his initial branch unless the other branch is heavier by k blocks or more (ignoring any delays). In a sense, having a delay can be shown to be equivalent to having such a k > 0. Moreover, in reality there is no such fixed delay d, so I am not sure that the model of having a fixed corresponding k is less realistic than having a fixed delay d.

The attack goes as follows: each of the attacker's nodes keeps mining in secret on top of the branch of its adjacent honest node, and whenever the competing branch reaches a tie from the point of view of the two nodes working on the first branch (meaning it is actually leading by k blocks), the attacker's node releases a single block that breaks the tie in favor of its adjacent honest node. So the question is: what is the minimal hash-power the attacker needs so that there is a positive probability that he will never run out of blocks when he needs to release a new one? (In fact the attacker might temporarily run out of blocks, so the network will temporarily collapse to one of the branches, but the attacker may later succeed in mining the block needed to reverse the collapse. We ignore that for simplicity.)

Mathematically, we have a random walk on the 2k-1 positions 0, ±1, ±2, …, ±(k-1). Each step is forward with probability 1/2 and backward with probability 1/2, where "forward" from (k-1) is regarded as staying at (k-1) and "backward" from -(k-1) is regarded as staying at -(k-1), as long as the attacker hasn't run out of secret blocks. This simple Markov process can be shown to have a uniform stationary distribution over the set of legal positions, therefore each of the attacker's nodes has to release a single block every 2(2k-1) blocks in expectation. So the minimum required hash rate of the attacker is 1/(2k-1) of the total honest hash-power, or 1/(2k) of the total network's hash-power.
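A quick Monte Carlo sketch of this bounded walk (my own code, with an arbitrary k = 5), checking the claimed release rate of one block per 2(2k-1) honest blocks per attacker node and the overall 1/(2k) fraction of total hash-power:

Code:
#include <cstdio>
#include <random>

int main() {
    const int k = 5;                        // collapse threshold from the argument above
    const long long steps = 10'000'000;     // honest blocks simulated
    std::mt19937_64 rng(1);
    std::bernoulli_distribution coin(0.5);

    int pos = 0;                            // walk over -(k-1) .. +(k-1)
    long long releasesA = 0, releasesB = 0; // blocks released by each attacker node

    for (long long i = 0; i < steps; ++i) {
        int dir = coin(rng) ? +1 : -1;
        if (pos + dir > k - 1)        ++releasesA;  // attacker A breaks the would-be collapse
        else if (pos + dir < -(k-1))  ++releasesB;  // attacker B breaks the would-be collapse
        else                          pos += dir;
    }
    std::printf("per-node release rate: %.5f (predicted %.5f)\n",
                (double)releasesA / (double)steps, 1.0 / (2.0 * (2 * k - 1)));
    std::printf("attacker fraction of all blocks: %.5f (predicted %.5f)\n",
                (double)(releasesA + releasesB) / (double)(steps + releasesA + releasesB),
                1.0 / (2.0 * k));
}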

The bigger the delay d is relative to the average time it takes to create a new block, the bigger k is, and the closer to zero the required hash-power of a successful attacker becomes. Therefore I am afraid that 1 second per block-chain block (which means even less than 1 second per block in the tree!!) is just too much. Yet I am sure GHOST is a revolutionary proposal that can greatly improve the very low current rate of one block per 10 minutes.

The last thing I would like to note is that vulnerability to this kind of convergence attack isn't unique to GHOST; it applies to any system whose delay is relatively high compared with the average time per block. If Bitcoin's blocks are increased a lot (well beyond the current 1MB maximum) and as a result the delay increases too, it will be possible to attack Bitcoin similarly.

Yet a major difference is that in Bitcoin the attacker cannot keep secret blocks and release them later, as this will have no effect. Therefore the attacker should always mine on top of the shorter chain, and hopes that the difference between the lengths of the competitive chains will not grow too much. Theoretically it can be shown that no matter how much hash-power the attacker has, eventually this gap will reach any natural number. As this might take a very long time, I still consider such an attack valid.


Lear
King Lear
Newbie
*
Offline Offline

Activity: 23
Merit: 0


View Profile
January 05, 2014, 03:26:15 PM
 #146

Hi,

I have post my suggestion for GHOST implementation, aiming to solve the centralization/unfairness of mining problem, and to reduce the vulnerability to convergence attacks (by making it much more difficult to effectively release old-mined blocks). Here is the link to the thread, and I will appreciate any of your comments:  https://bitcointalk.org/index.php?topic=396350.msg4278073#msg4278073.
 
Lear.
coinrevo
Member
**
Offline Offline

Activity: 70
Merit: 10


View Profile
January 05, 2014, 05:05:10 PM
 #147

Could someone explain to me how the nodes should select the heaviest subtree, when they don't have that information? It requires information from all(!) other nodes. How can this even make sense, because one node is connected to say 8 peers and not the entire network?
maaku
Legendary
*
expert
Offline Offline

Activity: 905
Merit: 1011


View Profile
January 05, 2014, 06:41:36 PM
 #148

Could someone explain to me how the nodes should select the heaviest subtree, when they don't have that information? It requires information from all(!) other nodes. How can this even make sense, because one node is connected to say 8 peers and not the entire network?

It's done based on local information.

I'm an independent developer working on bitcoin-core, making my living off community donations.
If you like my work, please consider donating yourself: 13snZ4ZyCzaL7358SmgvHGC9AxskqumNxP
coinrevo
Member
**
Offline Offline

Activity: 70
Merit: 10


View Profile
January 05, 2014, 08:39:11 PM
 #149

It's done by magic. All the magic nodes transmit information faster than the speed of light. They connect to parallel universes. Each universe is a uni-node, with probability of exactly 1/n. Due to quantum emission, wormholes eliminate the cost of friction, so miners can get the coinbase reward. IP address assignment by the Federal Reserve eliminates the so-called -31.23% attack. I can prove this theorem using advanced stochastic analysis, but only for lambda smaller than 0.03.
prezbo
Sr. Member
****
Offline Offline

Activity: 430
Merit: 250


View Profile
January 05, 2014, 09:16:26 PM
 #150

Could someone explain to me how the nodes should select the heaviest subtree, when they don't have that information? It requires information from all(!) other nodes. How can this even make sense, because one node is connected to say 8 peers and not the entire network?

Blocks are propagated across the network just like transactions are, so where exactly do you see a problem?
TierNolan
Legendary
*
Offline Offline

Activity: 1232
Merit: 1077


View Profile
January 05, 2014, 10:46:43 PM
 #151

Could someone explain to me how the nodes should select the heaviest subtree, when they don't have that information? It requires information from all(!) other nodes. How can this even make sense, because one node is connected to say 8 peers and not the entire network?

Block headers could be broadcast for orphaned blocks.  However, it is possible that 2 nodes will have different headers accumulated.

With a linear blockchain, proof of work is always exactly agreed by all miners.

Since all block headers are linked, all nodes can agree on the proof of work linked to a particular header.  If a node were missing any blocks on that chain, then it wouldn't be able to form the chain at all.

With GHOST, 2 nodes could disagree about the proof of work for a particular chain.  There is no way to be sure that you have all the orphans.

In practice, this isn't really a problem, because the 2 rules almost always give the same answer.

If you did push the block rate to a point where network latency is a problem, you could have 2 chains where there is disagreement.

Ensuring that all headers are well distributed between nodes is critical.  Disagreements are not likely to last long anyway.

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
iddo
Sr. Member
****
Offline Offline

Activity: 360
Merit: 251


View Profile
January 05, 2014, 11:42:30 PM
 #152

In practice, this isn't really a problem, because the 2 rules almost always give the same answer.

If this was true then using GHOST with all its added complexities would be pointless.
prezbo
Sr. Member
****
Offline Offline

Activity: 430
Merit: 250


View Profile
January 05, 2014, 11:45:15 PM
 #153

In practice, this isn't really a problem, because the 2 rules almost always give the same answer.

If this was true then using GHOST with all its added complexities would be pointless.
It was probably meant that they give the same answer as things are at the moment, but they wouldn't if the number of blocks per unit of time were increased significantly.
TierNolan
Legendary
*
Offline Offline

Activity: 1232
Merit: 1077


View Profile
January 06, 2014, 12:48:35 AM
 #154

It was probably meant as they give the same answer as things are at the moment, but they wouldn't if the number of blocks per unit of time would be increased significantly.

Right, with a 10 minute block time, there are so few orphans that there is total agreement.

If the block time was 1 second, then there would be a huge number of orphans.

Some system for quickly finding the best leaf block is important.  Each time a header is added, the cumulative weights of all of its ancestors in the tree have to be updated.
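A sketch of that bookkeeping (my own illustration, not code from the paper or from Bitcoin Core; the Node type and fields are invented): when a header arrives, only its ancestor chain needs updating, so the cost per header is proportional to its depth rather than to the size of the whole tree.

Code:
#include <cstdint>

struct Node {
    Node* pprev = nullptr;     // parent
    uint64_t work = 1;         // work contributed by this block
    uint64_t subtreeWork = 0;  // work of this block plus all descendants
};

// Call once when `added` is attached to the tree.
void OnHeaderAdded(Node* added) {
    added->subtreeWork = added->work;
    for (Node* p = added->pprev; p != nullptr; p = p->pprev)
        p->subtreeWork += added->work;  // propagate up the ancestor chain only
}

int main() {
    Node genesis, a, b;
    a.pprev = &genesis;
    b.pprev = &a;
    OnHeaderAdded(&genesis);
    OnHeaderAdded(&a);
    OnHeaderAdded(&b);  // now genesis.subtreeWork == 3, a.subtreeWork == 2
}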

An analysis of the optimal strategy for miners under GHOST would be important.

For bitcoin, broadcasting new blocks as fast as possible is optimal.  Is that still true for GHOST?

With GHOST, there would be lots of ties and near ties, so colluding to break ties in favour of a group may give those in the group an advantage.

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
asdf
Hero Member
*****
Offline Offline

Activity: 527
Merit: 500


View Profile
January 06, 2014, 01:11:26 AM
 #155


One thing I note is that there are lots of ways to optimise block wire representations. Sending blocks as lists of hashes, for example, would use 32 byte hashes in our current code just because it's lazily reusing the Bloom filtering path. But 32 bytes is massive overkill for this. You only need to distinguish between transactions in the mempool. You could easily drop to 16, 8 or even 4 byte hashes (hash prefixes) and still be able to disambiguate what the peers block message contains very reliably. Indeed they could be varint encoded so the hash is only as long as it needs to be given current traffic loads. Malicious actors might try to create collisions to force block propagation times up, but if peers negotiate random byte subsets upon connect for relaying of block contents then such an attack becomes impossible.


I thought of this also. How do you deal with collisions, though? What if the receiver of a hash list knows a transaction whose prefix collides with one in the list, but isn't in the list itself?

Would you be including the prefixes of the merkle tree nodes too? These would serve as a checksum, so the receiver would at least know which transaction has the colliding prefix. In fact, with the tree nodes included, you could be much less conservative with the prefix length, sending 2 or even 1 byte(s).

Perhaps that makes no sense. I'm pretty noobish with the protocol specifics.

I'm very interested in reducing block propagation times because I believe slow times will eventually drive transaction costs to uncompetitive levels. This is due to the inherent cost of "orphan risk" which comes from including transactions. See here: https://bitcointalk.org/index.php?topic=334284.msg3669703#msg3669703.



TierNolan
Legendary
*
Offline Offline

Activity: 1232
Merit: 1077


View Profile
January 06, 2014, 01:25:46 AM
 #156

Would you be including the prefixes of the merkle tree nodes too? These would serve as a checksum, so the receiver would at least know which transaction has the colliding prefix. In fact, with the tree nodes included, you could be much less conservative with the prefix length, sending 2 or even 1 byte(s).

Collisions could also be handled by having a re-send system.

Hashes would be grouped together into groups of 32, along with the merkle root of those hashes.

The sender could estimate how much of a prefix is required.  If 4 bytes were required, then a group of 32 hashes would require (32 * 4 + 32) bytes, i.e. 5 bytes per hash.

The receiver could replace the 32 hashes with its best guess and then check the CRC/Merkle root.  If it doesn't match, then it could ask for those hashes in expanded form.

Assume a block with 4000 transactions.

Peer 1: Sends block with 5 bytes per hash.

This works out at 20kB

Peer 2: Checks block and finds 2 groups with a mismatch

Peer 2: sends tx request

byte[32]: Block hash
varInt: group count
int: index 1
int: index 2

This works out as 41 bytes

Peer 1: Sends full transaction hashes for those groups

This works out at 64 * 32 = 2048 bytes

This is much less than 4000 * 32 = 128kB.  However, it is at the cost of an additional round trip delay.

If each compressed hash was prefixed by a length, then the sender could try to estimate which hashes require longer prefixes.

The birthday paradox may cause potential issues here.
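To make the size accounting concrete, here is a rough standalone sketch (all names are invented, and std::hash merely stands in for the real per-group merkle root) that encodes a 4000-transaction block as 4-byte prefixes in groups of 32 plus a per-group checksum, and prints the resulting relay size:

Code:
#include <algorithm>
#include <array>
#include <cstdint>
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

using TxId = std::array<uint8_t, 32>;

struct Group {
    std::vector<uint32_t> prefixes;  // first 4 bytes of up to 32 txids
    size_t checksum;                 // stand-in for the group merkle root
};

std::vector<Group> EncodeBlock(const std::vector<TxId>& txids) {
    std::vector<Group> groups;
    for (size_t i = 0; i < txids.size(); i += 32) {
        Group g;
        std::string concat;
        for (size_t j = i; j < std::min(i + 32, txids.size()); ++j) {
            const TxId& id = txids[j];
            uint32_t prefix = id[0] | (id[1] << 8) | (id[2] << 16) | (uint32_t(id[3]) << 24);
            g.prefixes.push_back(prefix);
            concat.append(reinterpret_cast<const char*>(id.data()), id.size());
        }
        g.checksum = std::hash<std::string>{}(concat);  // receiver recomputes and compares
        groups.push_back(std::move(g));
    }
    return groups;
}

int main() {
    std::vector<TxId> txids(4000);  // pretend block with 4000 transactions
    auto groups = EncodeBlock(txids);
    size_t bytes = 0;
    for (const auto& g : groups)
        bytes += 4 * g.prefixes.size() + 32;  // assume a 32-byte merkle root per group
    std::printf("groups=%zu, approx relay size=%zu bytes (vs %zu bytes for full hashes)\n",
                groups.size(), bytes, txids.size() * 32);
}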

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
avivz78 (OP)
Newbie
*
Offline Offline

Activity: 21
Merit: 0


View Profile WWW
January 06, 2014, 06:53:53 AM
 #157

Could someone explain to me how the nodes should select the heaviest subtree, when they don't have that information? It requires information from all(!) other nodes. How can this even make sense, because one node is connected to say 8 peers and not the entire network?

Block headers could be broadcast for orphaned blocks.  However, it is possible that 2 nodes will have different headers accumulated.

With a linear blockchain, proof of work is always exactly agreed by all miners.

Since all block headers are linked.  All nodes can agree on the proof of work linked to a particular header.  If a node was missing any blocks on that chain, then it wouldn't be able to form the chain at all.

With GHOST, 2 nodes could disagree about the proof of work for a particular chain.  There is no way to be sure that you have all the orphans.

Very simple:

Our proposal includes keeping the hashes of off-chain blocks in each block (new ones not included in any of the ancestors of the block). The chain, when read sequentially therefore contains all off-chain block hashes, and thus any node can know the entire tree (if a node is missing some block it simply requests it or its header).
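For concreteness, here is a minimal illustrative sketch of the selection rule itself (simplified, invented types; not our reference implementation): start at the genesis block and repeatedly step into the child whose subtree is heaviest, until a leaf is reached. Here subtreeWeight is assumed to be maintained as blocks and their off-chain references arrive.

Code:
#include <algorithm>
#include <vector>

struct TreeBlock {
    std::vector<TreeBlock*> children;
    long long subtreeWeight = 1;  // this block plus all of its descendants (or their summed work)
};

const TreeBlock* SelectGhostLeaf(const TreeBlock* genesis) {
    const TreeBlock* cur = genesis;
    while (!cur->children.empty()) {
        cur = *std::max_element(cur->children.begin(), cur->children.end(),
            [](const TreeBlock* a, const TreeBlock* b) {
                return a->subtreeWeight < b->subtreeWeight;
            });
    }
    return cur;
}

int main() {
    TreeBlock g, a, b, a1, a2;
    g.children = { &a, &b };
    a.children = { &a1, &a2 };
    a.subtreeWeight = 3;  // a plus its two children
    b.subtreeWeight = 1;  // a lone competing block
    const TreeBlock* leaf = SelectGhostLeaf(&g);  // descends into a's subtree
    (void)leaf;
}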
coinrevo
Member
**
Offline Offline

Activity: 70
Merit: 10


View Profile
January 06, 2014, 08:20:15 AM
 #158

Response to my critique of GHOST.

Quote
Huh? GHOST isn't different from Bitcoin in this regard.

No, not at all. The essence of Bitcoin is: each node works on PoW independently. That's also why there are discrete blocks. As you said, Bitcoin ignores other nodes' PoW - but why? The reason is that once a node requires information from other nodes (while working on one block, by default) you get an unstable network, because it takes time to transmit information. For example, a node could run his own implementation and try to find nodes that do more PoW and disregard other weaker nodes. As soon as you introduce the work of others as an input, one node has to know about all(!) other nodes, so that the calculations are fair. One could argue it is a random distribution, but that assumption clearly doesn't hold. Ideally all peers are equal, but hashing power is not evenly distributed. The Bitcoin algorithm would be much better if it could somehow force nodes to be of equal hashing power. That's what's meant by the "proof-of-work is essentially one-CPU-one-vote" comment in the paper.

To be more clear, by way of practical example: if Bitcoin used a GHOST-like rule today... Most mining is now done in certain hot spots, such as the big mine in Iceland [1]. Well, now you have the effect that any other mining is more profitable in Iceland, and miners should move to Iceland because delays are shorter and the heaviest subtree is located there. Simply by being located there you are nearest to the heaviest subtree (assuming that this miner is the biggest). That would introduce the effect that mining would quickly be concentrated in one or two places, when in fact we want the exact opposite. I suppose the reason this is entirely non-obvious is that people intuitively disregard speed-of-light latency. The other problem is how one looks at the robustness of statistics. Traditional mathematics is very bad at understanding robustness, especially in economics. The GHOST paper claims to "prove" attacks are not possible. Note that the original Bitcoin paper doesn't do that.

Why is the block time chosen to be 10 minutes? The reason is that it takes time to transmit information through a network. One has to really take into account that TCP/IP packets are not point to point connections. So the GHOST paper uses averages of delay times, but doesn't consider variance. The long 10 minutes is a safeguard against such geo-distribution effects. For example in theory people with bad interconnections can mine as well. The 0.5MB/sec throughput mentioned in the paper is not even achieved in developed countries, and so this leads to some biases the original design very thoughtfully avoids (which is why it is so stable).

[1]
http://dealbook.nytimes.com/2013/12/23/morning-agenda-the-bitcoin-mines-of-iceland/?_r=0
iddo
Sr. Member
****
Offline Offline

Activity: 360
Merit: 251


View Profile
January 06, 2014, 08:57:21 AM
 #159

Most mining is now done in certain hot spots, such as the big mine in Iceland [1]. Well, now you have the effect that any other mining is more profitable in Iceland, and miners should move to Iceland because delays are shorter and the heaviest subtree is located there. Simply by being located there you are nearest to the heaviest subtree (assuming that this miner is the biggest). That would introduce the effect that mining would quickly be concentrated in one or two places, when in fact we want the exact opposite.

Yes, if some miner nodes have better network connectivity to large portions of the total hashpower, then a shorter blocktime will benefit them more. The analysis in the GHOST paper doesn't take into account scenarios in which network connectivity among the miner nodes is unequal.
coinrevo
Member
**
Offline Offline

Activity: 70
Merit: 10


View Profile
January 06, 2014, 09:42:21 AM
 #160

It would quickly lead to an effect like the one I described above, which would break the system. Soon all mining nodes would be in one place, and all other nodes would have no incentive to participate or to believe the information is correct. So it's more than just connectivity. At least now the hashing power is distributed in the sense of geolocation, as delays are not an issue. I believe a careful analysis would show that 10 minutes is a well-chosen factor, depending on the variance of propagation in relation to proof-of-work time. I'm not sure if somebody has done that analysis, but I think it's more than likely Satoshi thought about this in these terms. The 10 minute threshold takes care of all eventualities.
TierNolan
Legendary
*
Offline Offline

Activity: 1232
Merit: 1077


View Profile
January 06, 2014, 02:04:42 PM
 #161

Our proposal includes keeping the hashes of off-chain blocks in each block (new ones not included in any of the ancestors of the block). The chain, when read sequentially therefore contains all off-chain block hashes, and thus any node can know the entire tree (if a node is missing some block it simply requests it or its header).

Interesting.

That eliminates the need for each sub-tree to be evaluated.  The weight for each block can be directly calculated purely from within the block.

You would need to embed a sub-header into the coinbase (or OP_RETURN tx).

Each node would have to remember all headers from orphaned blocks that are referenced.  A block would be invalid if it re-references a header.
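Something like this sketch for the re-reference check (invented types, not real Bitcoin code): keep the set of off-chain headers already referenced by ancestors, and reject any block that references one of them again.

Code:
#include <set>
#include <string>
#include <vector>

struct CandidateBlock {
    std::vector<std::string> offChainRefs;  // hashes of newly referenced off-chain blocks
};

bool CheckNoReReference(const CandidateBlock& blk,
                        std::set<std::string>& alreadyReferenced) {
    for (const auto& h : blk.offChainRefs)
        if (alreadyReferenced.count(h))
            return false;  // this off-chain header was referenced before -> block invalid
    alreadyReferenced.insert(blk.offChainRefs.begin(), blk.offChainRefs.end());
    return true;
}

int main() {
    std::set<std::string> seen;
    CandidateBlock b1{{"h1", "h2"}}, b2{{"h2"}};
    bool ok1 = CheckNoReReference(b1, seen);  // true
    bool ok2 = CheckNoReReference(b2, seen);  // false: "h2" was already referenced
    (void)ok1; (void)ok2;
}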

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
asdf
Hero Member
*****
Offline Offline

Activity: 527
Merit: 500


View Profile
January 07, 2014, 10:29:38 AM
Last edit: January 07, 2014, 11:35:15 AM by asdf
 #162

Would you be including the prefixes of the merkle tree nodes too? These would serve as a checksum, so the receiver would at least know which transaction has the colliding prefix. In fact, with the tree nodes included, you could be much less conservative with the prefix length, sending 2 or even 1 byte(s).

Collisions could also be handled by having a re-send system.

Hashes would be grouped together into groups of 32, along with the merkle root of those hashes.

The sender could estimate how much of a prefix is required.  If 4 bytes were required, then a group of 32 hashes would require (32 * 4 + 32) bytes, i.e. 5 bytes per hash.

The receiver could replace the 32 hashes with its best guess and then check the CRC/Merkle root.  If it doesn't match, then it could ask for those hashes in expanded form.

Assume a block with 4000 transactions.

Peer 1: Sends block with 5 bytes per hash.

This works out at 20kB

Peer 2: Checks block and finds 2 groups with a mismatch

Peer 2: sends tx request

byte[32]: Block hash
varInt: group count
int: index 1
int: index 2

This works out as 41 bytes

Peer 1: Sends full transaction hashes for those groups

This works out at 64 * 32 = 2048 bytes

This is much less than 4000 * 32 = 128kB.  However, it is at the cost of an additional round trip delay.

If each compressed hash was prefixed by a length, then the sender could try to estimate which hashes require longer prefixes.

The birthday paradox may cause potential issues here.

This "groups" idea is essentially the same thing as the merkle tree, if you make the group size 2. With smaller group size, you have to resend less transaction hashes in the case of collision. Also, the birthday paradox is less of an issue; you could use smaller prefixes.

Also, consider that we can compress the merkle nodes. Collisions in the nodes can be checked with the parent nodes all the way up the tree, but it becomes exponentially less likely that you'll need to resend a branch as you move up the tree. The advantage of this hierarchical checksum is that you can make the prefixes, of the nodes and the transaction hashes, much smaller.

I think that re-requesting any transactions is too costly, so we should work out the prefix size so that there are "probably" no collisions, but only just. Someone needs to do the math.
TierNolan
Legendary
*
Offline Offline

Activity: 1232
Merit: 1077


View Profile
January 07, 2014, 01:35:46 PM
 #163

I think that re-requesting any transactions is too costly, so we should work out the prefix size so that there are "probably" no collisions, but only just. Someone needs to do the math.

Right.  The prefix length depends on the size of the memory pool and the number of transactions in a block.

Assume there are 100k entries in the memory pool and 4k transactions per block.

If you have 4 bytes per transaction, then the odds of a given transaction's prefix matching one of the transactions in the memory pool are 100k * (1 / pow(2, 32)) = about 1 in 43,000.

If there are 4000 transactions in the block, then on average at least one of them will collide roughly every 43000/4000 ≈ 10.75 blocks, which is probably acceptable.

However, there is no clear definition of the size of the memory pool.  When a peer requests a block, it could specify how many bytes per transaction.  The reply would use at least that many bytes per entry.
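A quick sanity check of that arithmetic (illustrative only; the 100k mempool and 4000-transaction block are the assumed figures above):

Code:
#include <cmath>
#include <cstdio>

int main() {
    const double mempool = 100000.0, blockTxs = 4000.0;
    for (int prefixBytes = 2; prefixBytes <= 6; ++prefixBytes) {
        double perTx = mempool / std::pow(2.0, 8.0 * prefixBytes);  // chance a given tx prefix collides
        double perBlock = blockTxs * perTx;                         // expected collisions per block
        std::printf("%d-byte prefix: ~1 collision every %.2f blocks\n",
                    prefixBytes, 1.0 / perBlock);
    }
}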

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
coinzcoinzcoinz
Hero Member
*****
Offline Offline

Activity: 530
Merit: 500


View Profile
November 10, 2014, 04:23:16 PM
 #164

Darkcoin solved this
https://www.youtube.com/watch?v=zBjUPj-TmFE
DumbFruit
Sr. Member
****
Offline Offline

Activity: 433
Merit: 254


View Profile
November 10, 2014, 07:54:36 PM
Last edit: November 10, 2014, 08:23:42 PM by DumbFruit
 #165

Is there a better description of what it solved exactly? Instant worldwide consensus is impossible. Running two nodes on the same computer and getting fast confirmation times by jacking up the block rate gets a person a first-class ticket to Facepalm Town.

Update: Alright, this is what he's talking about;
https://www.darkcoin.io/about/what-is-darkcoin/instant-transaction-technology/

So Darkcoin uses the idea of "locking" a transaction by "masternodes", and the masternodes can somehow give a user some kind of confidence that the transaction isn't being double spent.

The exact same problems that you have with consensus otherwise would exist with "locks", unless I'm missing something.

Under 4.3 I would like to see a better defense of "This would happen very quickly, in the matter of a few seconds in most cases."
Is this because of highly centralized "Masternodes"?

By their (dumb) fruits shall ye know them indeed...
instagibbs
Member
**
Offline Offline

Activity: 114
Merit: 12


View Profile
November 10, 2014, 10:58:20 PM
 #166


A fairly broken, over-complicated, brittle version of 2-of-2 signing.

Why not just use GA.it or something similar? Same benefits (requires some trust), with no crazy systemic risks.

(I know why but it's rhetorical. Any decentralized system that claims secure 0-conf without trust is lying.)
ZcOoe0wRpk
Newbie
*
Offline Offline

Activity: 2
Merit: 0


View Profile
May 30, 2023, 01:48:07 AM
 #167

i will be carefully monitoring this technology; if it proves to be successful on paper and test runs, and bitcoin developers fail to implement this, i'll be investing in alt-coin which uses it.
In the end, this proposal was not adopted by the Bitcoin developers. The authors now have an alternative called Kaspa, which implements an extension of the BTC consensus along these lines. Kaspa was created by avivz78 and his student Yonatan Sompolinsky. With 1-second blocks it will quickly reach 32 blocks per second, while remaining fully decentralized and proof-of-work.
Flexystar
Full Member
***
Offline Offline

Activity: 1092
Merit: 227



View Profile
June 15, 2023, 01:48:50 PM
 #168

I am looking at the paper from a layman's view since I'm not very technical. However, I have been involved in one thread trying to understand why the block size has been kept limited to 1MB or 3MB or whatever. Now here in your paper I read that you can actually modify the block size and include a higher number of transactions. Before this paper my understanding was that the block size was limited and could never be touched since the time Satoshi released it.

But is this happening any time soon, or does this paper only describe the scalability issue and how it could be resolved "if" it were implemented?