mechanikalk (OP)
BlockReduce presents a new blockchain topology that offers a 3+ order-of-magnitude improvement in transaction throughput while avoiding the introduction of hierarchical power structures and centralization. It does this by modifying the block reward incentive to reward not only work, but also the optimization of network constraints and the efficient propagation of transactions, through a Proof-of-Work (PoW) managed hierarchy of tightly-coupled, merge-mined blockchains. Please take a look at the paper: https://arxiv.org/pdf/1811.00125.pdf
Here is a video presentation of BlockReduce given at the McCombs Bitcoin Conference. Also, here is a BIP draft to review and contribute to on Github. Any comments or questions are greatly appreciated!
mechanikalk (OP)
ETFBitcoin, thank you for taking a first look! I look forward to hearing your further thoughts.
1. The sub-groups are essentially child blockchains. They have the same characteristics and issues in terms of transaction propagation as BTC. However, because the groups are smaller and economically incentivized to form into low-latency groups, the bandwidth-constrained TPS will likely be much higher in BlockReduce.
2. I don't think the attack vector is ultimately any different from Bitcoin's, because the transactions are all ultimately propagated, validated, and recorded using PoW - they just do so incrementally.
3. This is a good point. If you look in the paper, one of the references is to a study published by BitFury looking at the constraints on scaling. The first constraint is bandwidth, the next is RAM, and the last is persistent data storage. However, BlockReduce enables meaningfully different types of nodes, which allows each node to decide on the level of resources it wants to commit based on its economic use case. For example, a "zone node" would only need to keep the state of its zone, plus a trimmed region state and a trimmed PRIME state. This means that although the aggregate blockchain is running at tens of thousands of TPS, the "zone node" would use a similar amount of resources as a Bitcoin node uses today. If a large miner wants to participate, they will dedicate more computational resources (RAM, storage, et cetera) because they have an economic incentive to validate more transactions in more zones. This allows them to quickly discover if a fork exists in a zone or region, preventing them from wasting hash power. It also provides them an arbitrage opportunity: they can increase their ROI by directing hash into a zone when a fork takes place.
mechanikalk (OP)
1. Since there are many regions and zones, do you think each region/zone would have active nodes, miners and users? IMO with the current Bitcoin state, I'm sure many regions/zones would have empty blocks (no transactions).

The idea of BlockReduce is that the protocol would adjust the block size and the number of regions and zones so that the system operates near capacity, say 80% full (see the sketch at the end of this post). This ensures that transaction fees are non-zero, and that there is an economic incentive to balance transaction demand evenly amongst the zones. It also helps create transaction revenue to incentivize miners to continue securing the chain as inflationary block rewards move towards zero.

2. In 4.4, it mentions that regions and zones have lower mining difficulty. If so, what stops an attacker with 51% of the hash rate from preventing transactions from confirming, or from double-spending? I've read 4.11, but IMO it's not good enough, since the attacker might pretend to be an honest miner with a minority of the hash rate, and human intervention is required (moving to a different zone).

The zone is only a mechanism by which incremental work ensures valid transaction processing without necessarily requiring other nodes to validate all transactions. This does not mean that other nodes don't need to verify the transactions. If they have enough hash power, they will be economically incentivized to validate a greater number of transactions from further-removed zones to ensure they never lose work. Additionally, even light nodes could validate transactions of adjacent zones without necessarily having to keep canonical state. Large miners will keep the entirety of state and verify all transactions, simply because they do not want to lose hash working on bad blocks. Even if a bad block is added in a zone, it will eventually be found and thrown out as it propagates up into the regions and PRIME. Ideally it will be found quickly in a zone, but if it is not, eventually all hash power in the network will work on it, which means that getting it included persistently in PRIME would require a 51% attack on all network hash.

3. In 4.5, it mentions "If multiple inputs are used in a transaction, they must reside within the same scope, meaning inputs must be taken from the same zone". Is it possible to treat it only as region/PRIME transaction scope?

Originally, I thought this could be done. However, I don't think it is a good idea, because if a UTXO had PRIME scope, a user could initiate conflicting transactions in many zones. This would take some time to be discovered and would waste a significant amount of hash power. If the PRIME transaction were sufficiently expensive to make this type of spam attack unreasonably costly, it could be done.

4. Since nodes with both PRIME and region scope require more resources, IMO centralization or control will happen, since there are fewer groups or nodes that need to be attacked/hijacked.

There could potentially be some amount of centralization, but the fact that nodes can have a continuum of different resource requirements is decentralizing. Additionally, because the overlap of verification and processing is not prescriptive, but rather variable based on a node's economic incentives, it will create a diverse, overlapping set of miners.

5. With region and zone scope, this will make bitcoin less pseudonymous, since it would make transaction-tracking analysis far easier.

There will be some decrease in anonymity; however, it is likely small and can be managed by the user if they desire better privacy.
If a user is in a set of 1/4 of all bitcoin transactions rather than all bitcoin transactions, there is less anonymity. That user could choose to operate throughout all zones, which would cost slightly more, but would provide the same amount of anonymity as Bitcoin.

Your idea is interesting, but I seriously doubt the Bitcoin community will accept the idea, since it would increase bitcoin development complexity a lot.

This is actually an incredibly simple change to the Bitcoin code base. It only requires a change to the block header, the transaction header, peer management, and the chain database.
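As a rough sketch of the kind of capacity retargeting I have in mind - the 80% target, the dead band, and the power-of-two split/merge below are illustrative choices, not part of the spec:

Code:
# Illustrative sketch: retarget the number of zones so blocks stay near
# a target fill level. Thresholds and rules here are hypothetical.
TARGET_FILL = 0.80   # desired average block fill
DEAD_BAND   = 0.10   # hysteresis to avoid oscillating splits/merges

def adjust_zone_count(num_zones: int, avg_fill: float) -> int:
    """Return a new zone count from the average block fill observed
    over some retargeting window (e.g. a difficulty epoch)."""
    if avg_fill > TARGET_FILL + DEAD_BAND:
        return num_zones * 2              # blocks too full: split zones
    if avg_fill < TARGET_FILL - DEAD_BAND and num_zones > 1:
        return num_zones // 2             # blocks underused: merge zones
    return num_zones

# e.g. adjust_zone_count(4, 0.95) -> 8, adjust_zone_count(4, 0.50) -> 2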
mechanikalk (OP)
November 16, 2018, 12:28:31 AM
I have developed this idea into a BIP which is available for people to view here: https://github.com/mechanikalk/bips/blob/master/bip-%3F%3F%3F%3F.mediawiki
Would really appreciate any additional feedback or discussion. The TL;DR is that it is many Bitcoin blockchains that are merge-mined at different difficulties with different sets of state. This allows state to be sharded without sharding the PoW. It would enable a Bitcoin-like blockchain to scale to 10,000s of TPS without any centralization. Thanks!
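To give a flavor of how the merge-mining at different difficulties works, here is a toy sketch: a single header hash is evaluated against three nested difficulty targets, so a hash that qualifies as a zone block occasionally also qualifies as a region or PRIME block (the target values and ratios are illustrative only, not the BIP's parameters):

Code:
# Toy sketch of hierarchical merge-mining: a single PoW is checked
# against three nested targets. Target values are illustrative only.
import hashlib

PRIME_TARGET  = 1 << 220   # hardest: fewest qualifying hashes
REGION_TARGET = 1 << 228   # easier than PRIME by a fixed ratio
ZONE_TARGET   = 1 << 236   # easiest: most blocks qualify only here

def classify_header(header: bytes) -> list[str]:
    """Return every level of the hierarchy this header's PoW satisfies."""
    digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    work = int.from_bytes(digest, "big")
    levels = []
    if work < ZONE_TARGET:
        levels.append("zone")
    if work < REGION_TARGET:
        levels.append("region")
    if work < PRIME_TARGET:
        levels.append("PRIME")
    return levels  # usually ["zone"]; rarely ["zone", "region", "PRIME"]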
boogersguy
Newbie
Offline
Activity: 21
Merit: 2
November 19, 2018, 11:12:03 AM
Are there or are there not legal and tax implications with respect to explicitly selecting a geographic location from which your transaction originates?
spartacusrex
November 19, 2018, 12:17:11 PM
OP! Congratulations on getting to this stage. There is a lot of work here. Wonderful stuff. A main chain that acts as an anchor for all the other chains running off it. Merged-Mining Meta-Chain or something. (I have a niggly feeling that somewhere deep in the bowels of bitcointalk something like this was mentioned/discussed - no idea though.)

Diving straight in - merge mining means that the hash power is spread over all the chains that a miner chooses to mine. All the chosen chains benefit. This is good. The only chain that everyone mines in your system is the PRIME chain. The lower levels are mined by fewer of the miners the lower you go. So although the hash power is kept high on PRIME - what gives the lower-level chains the POW security required? Adding them in as a root hash in the PRIME block (or whatever merkle path binds them) depends on lower-level regions agreeing via POW what goes up to the higher levels. So - would that not be the attack vector?

Please correct my understanding if this is incorrect!
Life is Code.
mechanikalk (OP)
November 19, 2018, 03:06:03 PM
Are there or are there not legal and tax implications with respect to explicitly selecting a geographic location from which your transaction originates?
The regions and zones are not prescribed by BlockReduce, but rather incentivized. Therefore, there will be influences of economic groups, geographic groups, and network topologies that play into "where" a region or zone is "located". However, none of the regions or zones will be perfectly monolithic. They will be overlapping and intertwined, with little respect for geographic jurisdictional boundaries. Furthermore, because there will always be at least one zone node outside of a given jurisdiction, no one will be able to claim a geographic location for a transaction.
aliashraf
Legendary
Offline
Activity: 1456
Merit: 1175
Always remember the cause!
November 19, 2018, 05:44:32 PM Merited by Welsh (2), ABCbits (1)
I think there are a few contradictions and shortcomings in the model you have proposed in the github document: Let's begin with your following argument Now that the transactions are built up into larger groups of data, say hundreds or thousands of transactions, nodes can first check if they need to transmit data to their peers by checking a hash. This enables a significant increase in bandwidth efficiency. Transmitting a hash would only take tens of bytes, but would allow the nodes to determine if they need to transmit several kilobytes of transaction data. {hence} Most importantly, the amount of bandwidth used to create 1,000 TPS blocks is significantly smaller than that of current blockchains
It is based on a false understanding of how the bitcoin networking layer works. Actually, for a while now there has been no unnecessary transmission of raw transaction data in bitcoin. If a node is already aware of a transaction, it will never ask for it to be re-transmitted, and no peer would 'push' it again. Nodes just check the txids included in blocks by querying the Merkle path, and if they ever find an unknown txid they can submit an inv message and ask for the raw data.

So, in your so-called PRIME chain, we have no communication efficiency gain compared to bitcoin, as long as the nodes present on this chain have to validate the transactions they are committing to. But nodes do have to validate the transactions, don't they? Otherwise how could they ever commit to their sub-trees (regions/zones)? I think there is a big misunderstanding on your part about blockchains: we don't commit to the hash of anything (a block, a transaction, ...) unless we have examined and validated it, and we can't validate anything without having access to its raw data.

This leads us to another serious issue: state segmentation. Officially, you describe the proposal as a combination of state sharding with other techniques:

BlockReduce combines the ideas of incremental work and sharding of state with merge-mining to form a tightly coupled PoW-managed hierarchy of blockchains which satisfies all of the proposed scaling requirements.

Well, it is not completely right, I'm afraid. For the PRIME chain to be legitimate in committing to region blocks (and so on), its nodes not only need ultimate access to all transaction data of the whole network, they also need to keep the whole state preserved and up to date. There can be no sharding of state.

There is more to discuss, but for now I suppose we have enough material to work on.

P.S. Please note that I'm a fan and I'm not denouncing your proposal as a whole. I'm now thinking of hierarchical sharding as a promising field of investigation, thanks to your work. Well done.
mechanikalk (OP)
November 19, 2018, 07:05:59 PM Last edit: November 20, 2018, 12:17:55 PM by mechanikalk
So although the hash power is kept high on PRIME - what gives the lower level chains the POW security required? Adding them in as a root hash in the prime block (or whatever merkle path binds it) depends on lower level regions agreeing via POW what goes up to the higher levels. So - would that not be the attack vector?

Great question, and I appreciate the feedback. The large miners that are mining PRIME will have an economic incentive to validate the blocks that are being passed up. If a bad block is detected - whether nefarious or an accidental fork in a lower chain - a miner that is keeping state over multiple child chains will be incentivized to figure out the valid block, redirect hash power toward the correct fork in the lower chain, and include only the correct head hash in the blocks they are mining on the parent chains. A lower fork could propagate as a fork in a region and eventually a fork in PRIME, which means that all hash power is used to determine consensus. The PRIME miners are effectively notarizing the results of the lower chains. Ideally, given the slower block times of PRIME relative to regions, and of regions relative to zones, most forks should be resolved before propagating one or more levels up. If they do propagate, the hash trees will provide an efficient mechanism for re-orging large amounts of data in PRIME.

Because all miners eventually need to do work on the region and zone hashes in PRIME, and because including the hash of an invalid block from a region or zone chain will invalidate a PRIME block, miners are incentivized to validate transactions. This effectively means that all hash power is validating what goes into PRIME.

How much state to maintain will be optional for nodes validating transactions outside of the zone in which they are mining. If a node with little hash power is mining in a zone and doesn't want to validate adjacent transactions, it can trust the incoming transactions up to some economic point, because incremental work has already been performed in mining the adjacent zone block. If a miner has more hash power, they will be incentivized to check more transactions and keep a greater amount of state. In the extreme case, they could keep all state of all things, which could be thought of as a PRIME node. They would still only be able to direct hash power uniquely to each zone; however, by keeping state in more places they create an insurance policy against their hash power being wasted, in that bad blocks are found quickly. They will also have an economic incentive to redirect hash power to adjudicate a fork.

There are many node types that could exist in this architecture: a zone node, a region node, and a PRIME node. The zone node would only keep the state of a zone, a region, and PRIME. A region node would keep the state of a region, all zones in that region, and PRIME. A PRIME node would keep the state of all things. Even within the different node types, there are different levels of validation that could be performed. For example, a zone node could choose to validate and keep the state of all zone peers, or it could keep some form of limited or truncated state. If the UTXO pool in each zone is passed into PRIME via the merkle interlink field, a zone node would be able to trust zone state at some depth in PRIME; it would then only need to maintain several hundred or a thousand blocks of the adjacent zones' verifications, without introducing any trust outside of PRIME consensus. The same logic could be used at the region level.

Even if all nodes are PRIME nodes and keep full state, the amount of permanent storage needed would only be ~8 TB per year at 1,000 TPS, which currently can be purchased for a couple hundred dollars.
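As a back-of-the-envelope check of that storage figure, assuming an average transaction size of about 250 bytes (an approximation, not a spec value):

Code:
# Back-of-the-envelope check of the ~8 TB/year figure at 1,000 TPS,
# assuming an average transaction size of ~250 bytes (illustrative).
TPS              = 1_000
BYTES_PER_TX     = 250
SECONDS_PER_YEAR = 365 * 24 * 3600

bytes_per_year = TPS * BYTES_PER_TX * SECONDS_PER_YEAR
print(bytes_per_year / 1e12)  # ~7.9 TB per year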
aliashraf
Legendary
Offline
Activity: 1456
Merit: 1175
Always remember the cause!
November 20, 2018, 06:48:20 AM
The zone node would only keep state of a zone, a region, and PRIME. A region node would keep state of a region, all zones in that region and PRIME. A PRIME node would keep state of all things.
This is an example of what I criticized in my post above about your misinterpretation of state sharding. In cryptocurrency and blockchain literature, especially bitcoin literature, by state we are referring to the UTXO set (the set of unspent transaction outputs). Furthermore, keeping state, in this context, implies nothing less than commitment, and as I've discussed in my previous post, you need to fully validate what you are committing to. For your so-called 'zone nodes' to 'keep the state' of their ancestors, they need to validate the events (transactions) and the state transitions (blocks), and that won't happen without spending the same amount of resources required to be a full region/PRIME node. This makes your classification of nodes, and the whole 'node types' concept, pointless. There is only one node type in your proposed architecture, as far as we are talking about full, self-contained nodes and not Bitcoin SPV-like nodes, which are not secure and have little if not zero role in keeping the network secure. On the other hand, as upper-level nodes are allowed to include transactions in their blocks, zone nodes have to keep track of their states, and as @spartacusrex has correctly mentioned above, in your proposal all nodes are PRIME. Actually, you have gone so far as to admit it, well, somehow:

Even if all nodes are PRIME nodes and keep full state, the amount of permanent storage needed would only be 8Gb per year for 1000 TPS which currently can be purchased for a couple hundred dollars.
It is 8TB, obviously, and for 1000 TPS you need more than just storage: RAM, processing power and bandwidth among them, and for the latter I have already refuted your efficiency argument.
mechanikalk (OP)
November 20, 2018, 12:08:10 PM
I think there are a few contradictions and shortcomings in the model you have proposed in the github document [...] In your so-called Prime chain, we have no communication efficiency gain compared to bitcoin, as long as the nodes present on this chain have to validate the transactions they are committing to. [...] For the PRIME chain to be legitimate in committing to region blocks (and so on), its nodes not only need ultimate access to all transaction data of the whole network, they also need to keep the whole state preserved and up to date. There can be no sharding of state.

Thank you for your comments and feedback. On your first point, I am aware that only the hashes of transactions are transmitted before the full transaction data. There was a lack of precision in how I described transaction propagation in the Github BIP, and I have updated it accordingly. However, it remains true that a significant volume of traffic is used passing transaction hashes around. This research paper by Bitfury from a few years ago, although slightly out of date, gives a good idea of bandwidth efficiency. One-time transmission of data in a peer-to-peer network is highly efficient; the inefficiencies measured have to do with creating a consistent set of transactions. Therefore, if I can incrementally create this set and represent it by a hash as I share it, I will decrease my bandwidth usage significantly, because 1 hash will represent 100 or 1,000 hashes.

In terms of sharding state: for simplicity of the initial discussion, let's just assume all nodes keep all state and validate all transactions. BlockReduce would reduce the amount of bandwidth needed to propagate a higher number of TPS efficiently. It would increase the need for RAM in proportion to the UTXO set, permanent data storage in proportion to chain size, and CPU power in proportion to TPS. However, none of these items are currently the limiting factor for scaling. The present limiting factor is bandwidth, and addressing it would allow a significant increase in TPS.
The other limiting factors can be addressed with other mechanisms, but again, let's initially discuss the proposal in terms of all nodes keeping and validating all state. Also, for reference, I presented this at the McCombs Blockchain Conference. The video gives a quick overview of BlockReduce for anyone interested in a summary that is easier than reading the paper.
aliashraf
Legendary
Offline
Activity: 1456
Merit: 1175
Always remember the cause!
November 20, 2018, 02:13:02 PM
Therefore, if I can incrementally create this set and represent it by a hash as I share it, I will decrease my bandwidth usage significantly, because 1 hash will represent 100 or 1,000 hashes.

The research document from Bitfury you've referenced is out of date, as you've mentioned; bitcoin has improved a lot since 2016 and BIP 152 (compact blocks). As for your concerns about the "significant volume of traffic used passing the transaction hashes around" and your argument about improving it with the topology you are proposing: besides the ambiguities and threats involved in the way this topology is to be dictated, I doubt it will be much help:

1. For non-mining full nodes/wallets that actively participate in the block/transaction relay protocol, nothing changes; they will go about their usual business of connecting and relaying hashes, and the raw data if it is requested by peers.
2. For mining nodes, although showing little interest in transactions that belong to shards they are not currently mining on might look somewhat reasonable, it is not! They need to be aware of other transactions and their fees to make the right decisions about the next shard they are willing to mine.
3. For SPV wallets, business is as usual too.
4. For all full nodes, having transactions in the mempool is a privilege, as it helps them catch up earlier with new-block-found events (they wouldn't need to query and validate transactions immediately).

So, I don't find a disruptive improvement here. Once again, I need to express my deep respect for your work, as I found it very inspiring and useful for my own line of research and protocol design, but I think it needs a lot of serious improvements. Cheers
spartacusrex
November 20, 2018, 02:39:47 PM
@Aliashraf makes good points about the need for miners to validate all the data. Sharding has yet to fix this issue. But, if we can brainstorm a bit, this idea may bear fruit.

I like the idea of a central POW chain that everyone mines, and that off this central chain (the PRIME) you can run multiple side chains (a simplification of your hierarchical structure). The real issue for me is that POW side chains are insecure.. unless you can get a lot of miners to mine them. Checkpointing the side chain blocks in the PRIME POW chain simply timestamps the insecure blocks securely ;p

So can we make the side chains more secure? Well, yes you can. You can mine them POS. POS doesn't work 'publicly', but in a federated environment it works much better - in an environment where you don't mind that certain people run the chain. Better still, by time-stamping the blocks in the PRIME POW chain, you actually remove the nothing-at-stake problem. Because now there is something. So 'getting on the correct chain' (the base POS problem - as you can copy a chain's security for free) is easy. And boom, I've just described Bitcoin + Liquid..

But can we do the same with POW..? I think POW altcoins are almost exactly this. They take traffic off the PRIME chain, and PRIME miners don't need to validate them; they just don't run as _official_ side chains. The trick here is that the POW algorithms need to be different, and the mining community large. Then you could run these as POW side chains.

Is there a way that every chain can benefit from ALL the shared POW equally across all the different chains whilst only validating shards of the data? Not yet.. me thinks. But THAT would be cool.
Life is Code.
mechanikalk (OP)
November 20, 2018, 03:08:02 PM Last edit: November 20, 2018, 04:50:20 PM by mechanikalk
As for your concerns about the "significant volume of traffic used passing the transaction hashes around" and your argument about improving it with the topology you are proposing: besides the ambiguities and threats involved in the way this topology is to be dictated, I doubt it will be much help:
Let's work through an example, using the following assumptions: a node has 100 peers, a transaction hash is 32 bytes, and a transaction is 250 bytes.

A transaction is broadcast to one of our node's peers. Our node sees this, requests the transaction data, and begins to broadcast the transaction hash to its peers. In the meantime, 49 of our node's peers also broadcast the same transaction hash to our node. This means that our node sends 50*32 bytes plus maybe 2*250 bytes, and receives 50*32 bytes plus 250 bytes. (The 2 is just an assumption about the number of times we transmit the full transaction data; ideally, averaged across all nodes, this would be >1, so 2 is pretty conservative.)

Summary, 1 transaction - Tx: 2,100 bytes; Rx: 1,850 bytes.

If transactions are instead shared with a peer as a zone block containing 100 transactions, I will share a 32-byte hash of the block the same number of times and transmit the full block of data, say, the same number of times. So our node sends 50*32 plus 2*250*100 bytes and receives 50*32 plus 250*100 bytes.

Summary, 100 transactions - Tx: 51,600 bytes; Rx: 26,600 bytes.

Normalizing, I get 516 Tx bytes/transaction and 266 Rx bytes/transaction. This represents roughly 75% greater efficiency in transmit bandwidth usage and roughly 86% greater efficiency in receive bandwidth usage. This gets even better when aggregating into PRIME. In the smallest 2x2 hierarchy, region blocks would have roughly 1,000 transactions per block, which means that at the PRIME level the per-transaction announcement overhead drops by >98%, simply by aggregating transactions into region blocks before transmitting them to peers.
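For anyone who wants to check the arithmetic, here is a small Python sketch using the same illustrative assumptions (100 peers, 32-byte hashes, 250-byte transactions, ~50 hash relays, full data sent about twice):

Code:
# Sanity check of the bandwidth comparison above. All inputs are the
# same illustrative assumptions used in the example, not measured values.
HASH_B, TX_B, RELAYS, RESENDS = 32, 250, 50, 2

# Per-transaction gossip (status quo): hash relayed to/from ~50 peers,
# raw transaction sent ~2x and received once.
tx_send = RELAYS * HASH_B + RESENDS * TX_B        # 1600 + 500   = 2100
tx_recv = RELAYS * HASH_B + 1 * TX_B              # 1600 + 250   = 1850

# Zone-block gossip: one 32-byte hash stands in for a 100-tx block.
N = 100
blk_send = RELAYS * HASH_B + RESENDS * TX_B * N   # 1600 + 50000 = 51600
blk_recv = RELAYS * HASH_B + 1 * TX_B * N         # 1600 + 25000 = 26600

print(blk_send / N, blk_recv / N)                 # 516.0, 266.0 bytes/tx
print(1 - (blk_send / N) / tx_send)               # ~0.75 -> ~75% less sent
print(1 - (blk_recv / N) / tx_recv)               # ~0.86 -> ~86% less received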
mechanikalk (OP)
November 20, 2018, 03:22:02 PM Last edit: November 20, 2018, 03:32:45 PM by mechanikalk
I like the idea of a central POW chain that everyone mines, and that off this central chain (the PRIME) you can run multiple side chains (a simplification of your hierarchical structure). The real issue for me is that POW side chains are insecure.. unless you can get a lot of miners to mine them. Checkpointing the side chain blocks in the PRIME POW chain simply timestamps the insecure blocks securely ;p
The reason the side chains are just as secure as the PRIME chain is that the PRIME chain is still checking all transactions explicitly (if not implicitly). Let's use a simple example with a PRIME chain and two child chains, where each child chain has 50% of the hash power of PRIME. Miners in both children (A and B) are validating the transactions that they receive via the child blocks and including the transactions and block hashes in PRIME. Therefore, by validating the transactions before including the hash in PRIME, 100% of the hash power is checking the transaction; the transactions are just as secure as if there were no child chains.

Let's for example imagine a fork in A (malicious or otherwise). The miners in B will have to decide which hashes in A to include when working on PRIME. If they include the "wrong/invalid" hash they will be wasting work. Therefore, everyone is still explicitly voting on every transaction with 100% of the hash power.

Is there a way that every chain can benefit from ALL the shared POW equally across all the different chains whilst only validating shards of the data? Not yet.. me thinks. But THAT would be cool.
I think the above is the way that every chain benefits from ALL the PoW while explicitly validating every transaction. If we want to complicate the discussion: if I am a small miner in PRIME working in A, and I don't want to validate every transaction in B, I could choose to "trust" the block given to me. Why is this trust but not trust? If I take it on faith, I may waste work by including that B block hash in PRIME if it is invalid. However, I am not taking it purely on faith, because B is presenting me half the work that goes into a PRIME block; it would be very expensive for B to create a bad block, which would ultimately be discovered and re-orged out of the chain. So, if I don't have the computing resources to justify this validation, I can "bet" that the "bet" made by B is good. In case it is not, a larger miner with greater economic incentive and resources will identify the bad block and not include its hash in PRIME. This level of trust is significantly lower than the trust placed by anyone using a mining pool, or by a Bitcoin user who doesn't run a full node.
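To make the "bet" concrete, the decision reduces to a simple expected-cost comparison (all numbers and names below are hypothetical, purely for illustration):

Code:
# Toy decision rule for a small miner: validate a foreign zone block,
# or trust the PoW attached to it? All quantities are hypothetical.
def should_validate(p_invalid: float, wasted_work_cost: float,
                    validation_cost: float) -> bool:
    """Validate when the expected loss from mining on a bad block
    exceeds the cost of checking the block yourself."""
    return p_invalid * wasted_work_cost > validation_cost

# A block carrying half a PRIME block's work is very expensive to fake,
# so p_invalid is tiny and skipping validation can be rational:
print(should_validate(p_invalid=1e-4, wasted_work_cost=1_000.0,
                      validation_cost=1.0))  # False -> trust the work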
spartacusrex
November 20, 2018, 03:53:59 PM
The reason the side chains are just as secure as the PRIME chain is that the PRIME chain is still checking all transactions explicitly (if not implicitly). [...] Therefore, by validating the transactions before including the hash in PRIME, 100% of the hash power is checking the transaction; the transactions are just as secure as if there were no child chains.
No. With your example - PRIME chain + 2 side chains - the miners of each side chain mine just their side chain and the PRIME chain. So yes, the PRIME chain has 100% of the hash, but the side chains have 50% each. This means that any transaction about to be included in the PRIME chain has been _secured_ by 50% of the hash, in the side chain. Then you use 100% of the hash to secure that, in the PRIME chain. The weakest link in this scenario/chain is the 50% POW side chain. That is the security of the transactions in the side chain - not 100%. You have secured the 50% POW trxs in perpetuity with 100% of the POW..
Life is Code.
mechanikalk (OP)
November 20, 2018, 04:26:43 PM Last edit: November 20, 2018, 04:38:08 PM by mechanikalk
No. With your example - PRIME chain + 2 side chains - the miners of each side chain mine just their side chain and the PRIME chain. [...] The weakest link in this scenario/chain is the 50% POW side chain. That is the security of the transactions in the side chain - not 100%.

Since the miners are validating all transactions in PRIME and including them in the PRIME block, 100% of the hash power voted for transactions that originated in either A or B. Don't think of the child chains as consensus; think of them as a mechanism for creating sets of transactions which can be efficiently propagated. This is what I call consistency: a consistent set of transactions which the network validates and performs work on to include in a block. Once everyone - all miners - has the transaction set, they check it and build a PRIME block. All transactions are included in PRIME in some form of a nested merkle tree. All transactions were validated by all miners, and voted on by all miners, when included in PRIME. Therefore there is no sharding of work.

Let's say, for example, that A propagates a block with an invalid transaction. Miners in B will see this invalid transaction and refuse to include the hash in PRIME or to do any work on the transactions included in that block. By the rules of consensus, transactions are not actually valid until they have been included in PRIME. Again, don't even think of the child chains as blocks; think of them only as a mechanism for aggregating transactions. The reason this is different from merge mining is that the consensus rules are consistent and checked across the hierarchy of all chains. This is not true in the case of something like Bitcoin and Namecoin.
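As a rough sketch of what verifying against that nested merkle tree could look like - the proof layout and hashing below are illustrative, not the exact structure from the paper:

Code:
# Sketch of a nested inclusion proof: tx -> zone root -> region root ->
# PRIME root. The hashing and proof layout are illustrative only.
import hashlib

def h(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()

def verify_path(leaf: bytes, path: list[tuple[bytes, bool]],
                root: bytes) -> bool:
    """Standard Merkle path check; path holds (sibling, sibling_is_left)."""
    acc = leaf
    for sibling, sibling_is_left in path:
        acc = h(sibling, acc) if sibling_is_left else h(acc, sibling)
    return acc == root

def verify_nested(txid, zone_path, zone_root,
                  region_path, region_root,
                  prime_path, prime_root) -> bool:
    # The tx must be in the zone block, the zone head in the region
    # block, and the region head in PRIME for full PRIME finality.
    return (verify_path(txid, zone_path, zone_root)
            and verify_path(zone_root, region_path, region_root)
            and verify_path(region_root, prime_path, prime_root))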
aliashraf
Legendary
Offline
Activity: 1456
Merit: 1175
Always remember the cause!
November 20, 2018, 05:43:31 PM
Let's work through an example, using the following assumptions: a node has 100 peers, a transaction hash is 32 bytes, and a transaction is 250 bytes. [...] Normalizing, I get 516 Tx bytes/transaction and 266 Rx bytes/transaction.

Sorry, but it is not correct. For transactions to be included in a block, they have to be "whispered" across the network and stored in the nodes' mempools beforehand. Your schema couldn't change anything about this; once a block is generated, nodes have to validate it, and if for some reason they have not been informed of a transaction already, they have to fetch it anyway. Without sharding the state, it would be very hard to improve the bandwidth requirement for the network.

That being said, it is worth mentioning that the main challenge for scaling bitcoin is not the total bandwidth requirement for whispering transactions. Transactions occur with a random distribution in time and are naturally tolerant of being queued and delayed for a few seconds; in a hypothetical 10,000 TPS situation, it takes a moderate internet connection for a node to semi-synchronize its mempool with other peers. We could initiate a BIP for such a situation to improve the transaction relay protocol in the bitcoin networking layer by aggregating batches of fresh transactions and synchronizing them according to some convention, like an ordered set of 100 transaction ids falling into some specific criteria, ...

The most critical problem with networking is the latency involved in whispering new blocks when nodes have no clue about some of the included transactions; it is the basis for the proximity premium flaw in bitcoin and its centralization threat, because of its direct consequence: pooling pressure.
mechanikalk (OP)
November 21, 2018, 02:46:37 PM
Sorry, but it is not correct. For transactions to be included in a block, they have to be "whispered" across the network and stored in the nodes' mempools beforehand. Your schema couldn't change anything about this; once a block is generated, nodes have to validate it, and if for some reason they have not been informed of a transaction already, they have to fetch it anyway. Without sharding the state, it would be very hard to improve the bandwidth requirement for the network.
Could you please be more specific about what you find wrong with the math in the example I gave above? The example gives a simplified version of how a transaction is "whispered" across the network currently and with BlockReduce. As for the mempool, I don't think it really affects bandwidth, because it essentially adds capacitance to the system, which can be ignored at steady state.

We could initiate a BIP for such a situation to improve the transaction relay protocol in the bitcoin networking layer by aggregating batches of fresh transactions and synchronizing them according to some convention, like an ordered set of 100 transaction ids falling into some specific criteria, ...
The most critical problem with networking is the latency involved in whispering new blocks when nodes have no clue about some of the included transactions; it is the basis for the proximity premium flaw in bitcoin and its centralization threat, because of its direct consequence: pooling pressure.
I would like to work with you to initiate a BIP like this. I would propose that if we aggregate transactions, we will need a way for nodes to determine when they should share groups of transactions. Ideally, the mechanism for determining when to share a group should be decentralized, so no single node can withhold transactions for an inordinate amount of time. There should also be a way for the nodes that receive the grouping of transactions to verify that the criterion was met. We could use something like a one-way mathematical function operating on an unpredictable data set that would periodically, stochastically meet some arbitrary criterion; when the criterion is met, the aggregating node would transmit the data to all of the other nodes, and the other nodes would be able to easily verify that the criterion was met (a toy sketch is below). We could improve on this further by creating some redundancy for the aggregating node, for example by having a small group of highly connected nodes work together to aggregate a set of transactions amongst themselves before sharing it with the larger network. Do you think something like this could work?
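As a toy illustration of the kind of stochastic release criterion I mean (the threshold, ordering, and serialization below are placeholders, not a concrete proposal):

Code:
# Toy release criterion: a batch of txids is broadcast once the hash of
# the ordered set falls under a threshold that any peer can recheck.
# Threshold and encoding are placeholders, not a concrete spec.
import hashlib

THRESHOLD = 1 << 252  # illustrative: roughly 1 in 16 batches qualifies

def batch_ready(txids: list[bytes]) -> bool:
    digest = hashlib.sha256(b"".join(sorted(txids))).digest()
    return int.from_bytes(digest, "big") < THRESHOLD

# A receiving node recomputes batch_ready() over the same ordered txid
# set, so no single aggregator can forge or delay the release condition.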
aliashraf
Legendary
Offline
Activity: 1456
Merit: 1175
Always remember the cause!
November 21, 2018, 06:38:57 PM Merited by Welsh (4), xtraelv (1)
Could you please be more specific about what you find wrong with the math in the example I gave above? [...]

Looking closer at your calculations, they compare bitcoin transaction propagation with your schema's block reconciliation. These are two basically different things (and yet you are exaggerating the situation with bitcoin). As I've argued before, both the bitcoin p2p network and your schema need the same amount of effort and resources for raw transaction propagation, because the nodes are full nodes and you just can't dictate a predefined topology like a central authority or something.

I would like to work with you to initiate a BIP like this. [...] Do you think something like this could work?

Well, I'm ok with collateral work and most of the ideas above, but not with dedicated nodes; it would be hard to implement an incentive mechanism. Right now, Greg Maxwell and others are working on a privacy-focused improvement to the transaction relay protocol in bitcoin: Dandelion. The good thing about this BIP is their strategy of delaying the transaction relay procedure for a while (to make it very hard, if not impossible, for surveillance services to track the sender's IP), and it looks to be committed to the release code in the near future! It will be a great opportunity to do more processing (classification, aggregation, ...) while the transaction is queued waiting for relay.