Quantus
Legendary
Offline
Activity: 883
Merit: 1005
|
|
September 12, 2015, 05:31:21 PM Last edit: September 12, 2015, 05:51:17 PM by Quantus |
|
Anything you compress has to be decompressed on each node and confirmed before it can be propagated out to the next node. This would slow propagation at the current block size.
However, if a node were a few weeks/months/years behind, it might benefit from compressed 'blocks-of-blocks'. This would require a lot of programming to set up and test.
Edit: I think adamstgBit should stop creating shitty threads on this topic; it's not helping anyone.
|
(I am a 1MB block supporter who thinks all users should be using Full-Node clients) Avoid the XT shills, they only want to destroy bitcoin, their hubris and greed will destroy us. Know your adversary https://www.youtube.com/watch?v=BKorP55Aqvg
|
|
|
CIYAM
Legendary
Offline
Activity: 1890
Merit: 1086
Ian Knowles - CIYAM Lead Developer
|
|
September 12, 2015, 05:34:46 PM |
|
> Anything you compress has to be decompressed on each node and confirmed before it can be propagated out to the next node. This would slow propagation at the current block size.

The whole point of Corallo's work has nothing to do with compression - it is that nodes are already aware of txs, so blocks can just use txids rather than the actual tx content. It simply saves bandwidth by not re-sending information that was already communicated. The current @adamstgBit forum member seems to be completely unaware of this and thinks that some magic "compression" has been invented (I'm pretty sure the old @adamstgBit would have known better, which makes it more likely that this account has been sold to a newbie).
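The txid-relay idea described here can be sketched in a few lines. This is a toy illustration of the principle (announce a block as txids, let the peer rebuild it from its mempool and request only what's missing), not Corallo's actual protocol; all names and message shapes are made up:

```python
import hashlib

def txid(raw_tx: bytes) -> bytes:
    """Bitcoin txid: double SHA-256 of the raw transaction."""
    return hashlib.sha256(hashlib.sha256(raw_tx).digest()).digest()

def make_sketch(block_txs):
    """Sender: announce the block as a list of txids instead of full txs."""
    return [txid(tx) for tx in block_txs]

def reconstruct(sketch, mempool):
    """Receiver: rebuild the block from its own mempool.
    Returns (txs, missing_txids); missing ones must be requested from peers."""
    txs, missing = [], []
    for tid in sketch:
        if tid in mempool:
            txs.append(mempool[tid])
        else:
            missing.append(tid)
    return txs, missing

# toy demo: the receiver already holds 2 of the block's 3 transactions,
# so only one tx needs to be fetched in full
txs = [b"tx-alpha", b"tx-beta", b"tx-gamma"]
mempool = {txid(t): t for t in txs[:2]}
sketch = make_sketch(txs)
rebuilt, missing = reconstruct(sketch, mempool)
```

Nothing here is "compression" in the zip sense: the bandwidth saving comes purely from not re-sending bytes the peer already has.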
|
|
|
|
jonald_fyookball
Legendary
Offline
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
|
|
September 12, 2015, 05:54:58 PM |
|
> Anything you compress has to be decompressed on each node and confirmed before it can be propagated out to the next node. This would slow propagation at the current block size.

> The whole point of Corallo's work has nothing to do with compression - it is that nodes are already aware of txs, so blocks can just use txids rather than the actual tx content. It simply saves bandwidth by not re-sending information that was already communicated.

Agree with both these points.
|
|
|
|
sAt0sHiFanClub
|
|
September 12, 2015, 07:13:51 PM |
|
> Or you can explain it to me. I've nearly 15 years' experience writing network correlators and rating engines for the mobile telco industry, so there is little you can teach me about HF data propagation that I don't know.

> Explain what? That the network can't handle 1GB blocks without completely redefining what Bitcoin is? What Matt Corallo's relay network does is try to lower orphan rates for miners. It has nothing to do with increasing network throughput (tx rate); it only lowers the amount of data transmitted with a block. After all, full nodes will still have to download full transaction data. Moreover, it depends on two unreliable assumptions: 1) participating miners are cooperative, i.e. only/mostly include txs that other miners have in their mempools; 2) participants' mempools are highly synchronized. The latter is especially speculative when we try to project it onto the whole network. If we could make sure mempools were synchronized, we wouldn't need a blockchain in the first place. But nodes' relay/mempool acceptance policy is highly customizable. E.g. during the recent stress test, many users had to increase their fee acceptance thresholds to keep their nodes stable. That means very different mempools across users.

I don't see why you have to redefine what bitcoin is to increase transaction throughput. I think we are conflating 2 different aspects of the same issue. The orphan rate is a direct function of the complexity and scale of the p2p network, and of the volume of data in each discrete unit (blocks). There is currently a ~2% orphan rate which miners (in their own interest) would like to see reduced. So [Matt's relay network] does that by relaying only the information they need. They already have the txs in their mempool, so all they need is the merkle root to confirm that the txs they include match the MR in the block. Any txs they don't have, they ask peers for.

It's not compression, but it has the same effect as compression - redundant data is not resent. All fine and dandy. But one of the central pillars of smallblockism is that larger blocks will result in increased orphan rates, as larger blocks take longer to propagate across the network. Matt's relay network, if applied to bitcoin as a whole rather than to a select cadre of approved players, could deliver a performance increase that would obviate that argument.

tl;dr Matt's RN could have benefits both for miners' orphan concerns and for tx throughput (more txs per block).
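The merkle-root check mentioned above ("all they need is the merkle root to confirm that the txs they include match the MR in the block") can be sketched as follows. This is a toy illustration of Bitcoin-style merkle hashing, not production code:

```python
import hashlib

def dsha(b: bytes) -> bytes:
    """Double SHA-256, as used throughout Bitcoin."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids):
    """Bitcoin-style merkle root over a list of txids:
    hash pairs level by level, duplicating the last entry when a
    level has an odd number of nodes."""
    assert txids, "empty block"
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dsha(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# a node that already holds the txids can verify a candidate block
# header's merkle root without re-downloading a single transaction
txids = [dsha(t) for t in (b"a", b"b", b"c")]
header_root = merkle_root(txids)  # what the miner committed to
```

If the locally computed root matches the one in the header, the node knows its mempool copy of the block is exactly what the miner built.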
|
We must make money worse as a commodity if we wish to make it better as a medium of exchange
|
|
|
UserVVIP
|
|
September 12, 2015, 07:16:01 PM |
|
Don't you think that it is a little too much btc for that amount?
|
|
|
|
thejaytiesto
Legendary
Offline
Activity: 1358
Merit: 1014
|
|
September 12, 2015, 07:46:27 PM |
|
I think compression of blocks was already addressed by gmaxwell in here, but I can't find the actual facts. In any case, if this hasn't been considered the end-all-be-all solution to the blocksize problem, I'm sure there are drawbacks, so I'm pretty sure we will end up needing bigger blocks and Blockstream-type tech anyway.
|
|
|
|
jonald_fyookball
Legendary
Offline
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
|
|
September 12, 2015, 07:55:34 PM |
|
I'm having trouble following this.

> But one of the central pillars of smallblockism is that larger blocks will result in increased orphan rates, as larger blocks take longer to propagate across the network. Matt's relay network, if applied to bitcoin as a whole rather than to a select cadre of approved players, could deliver a performance increase that would obviate that argument.

Doesn't the entire block need to be sent somewhere (at least from a miner to one of the nodes in the relay network)?
|
|
|
|
RoadTrain
Legendary
Offline
Activity: 1386
Merit: 1009
|
|
September 12, 2015, 08:23:50 PM |
|
> I don't see why you have to redefine what bitcoin is to increase transaction throughput.

That's quite a straw man; I didn't say that, please don't overgeneralize.

> I think we are conflating 2 different aspects of the same issue. The orphan rate is a direct function of the complexity and scale of the p2p network, and of the volume of data in each discrete unit (blocks). [...] It's not compression, but it has the same effect as compression - redundant data is not resent. All fine and dandy.

Once again, this is all based on the weak assumption that miners are cooperative -- in the worst-case scenario we fall back on the regular propagation protocol. While Matt's RN doesn't have any major downsides per se, it effectively downplays the issue at hand -- that in the worst case the information to be transmitted scales linearly with block size. While it appears we can easily increase block sizes thanks to Matt's RN, things get worse in the case of uncooperative behavior.

> But one of the central pillars of smallblockism is that larger blocks will result in increased orphan rates [...] could deliver a performance increase that would obviate that argument.

I'd like to know how exactly Matt's RN would obviate it. It would mask it, yes, but it's not a magic bullet.
|
|
|
|
sAt0sHiFanClub
|
|
September 12, 2015, 09:22:47 PM |
|
> I'm having trouble following this.
> "But one of the central pillars of smallblockism is that having larger blocks will result in increased orphan rates as the larger blocks take longer to propagate across the network. But Matt's relay network, if applied to bitcoin as a whole rather than a select cadre of approved players, could deliver a performance increase that would obviate that argument."
> Doesn't the entire block need to be sent somewhere (at least from a miner to one of the nodes in the relay network)?

A block is (header + n×tx). The vast majority of the transactions should already be in the node's mempool, so why do they have to be sent a second time? Obviously, due to the nature of the network, not all nodes will have the exact same set of txs, but any missing ones can be requested. Also, the relay keeps track of which txs have been sent. This is one of the long-time conundrums of bitcoin - the redundant resending of transactions. You get a series of 10-byte ids instead of ~250-byte transactions.
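The savings implied by those numbers (10-byte ids vs ~250-byte transactions) are easy to ballpark. A rough sketch, with the constants taken from the post above rather than measured, and the miss-rate parameter purely illustrative:

```python
# back-of-envelope savings for id-based block relay
TX_SIZE = 250   # bytes, assumed average full transaction (from the post)
ID_SIZE = 10    # bytes, short id sent instead (from the post)
HEADER = 80     # bytes, block header

def relay_bytes(n_txs, miss_rate=0.0):
    """Bytes sent for a block of n_txs when a fraction miss_rate of
    the txs is absent from the peer's mempool and must be sent in full."""
    missing = int(n_txs * miss_rate)
    return HEADER + n_txs * ID_SIZE + missing * TX_SIZE

full_block = HEADER + 4000 * TX_SIZE   # a full ~1 MB block (~4000 txs)
ids_only = relay_bytes(4000)           # everything already in the mempool
print(ids_only / full_block)           # well under 5% of the full size
```

Note how quickly the benefit erodes as mempools diverge: every missing tx costs a full ~250 bytes again, which is exactly the "uncooperative miners" caveat raised earlier in the thread.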
|
We must make money worse as a commodity if we wish to make it better as a medium of exchange
|
|
|
jonald_fyookball
Legendary
Offline
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
|
|
September 12, 2015, 09:26:38 PM |
|
> I'm having trouble following this. [...] Doesn't the entire block need to be sent somewhere (at least from a miner to one of the nodes in the relay network)?

> A block is (header + n×tx). The vast majority of the transactions should already be in the node's mempool, so why do they have to be sent a second time? [...] You get a series of 10-byte ids instead of ~250-byte transactions.

OK, assuming the miner only sends a condensed version of the block with pointers to the relay network, the relay network still has to broadcast the full block to other nodes, correct?
|
|
|
|
sAt0sHiFanClub
|
|
September 12, 2015, 09:35:07 PM |
|
> I don't see why you have to redefine what bitcoin is to increase transaction throughput.
> That's quite a straw man here, I didn't say that, please don't overgeneralize.

Let's not split hairs. You said 1GB blocks require a redefinition of bitcoin. Larger blocks have more txs. Blocks are fixed in time. More txs / constant time = higher tx rate.

> I think we are conflating 2 different aspects of the same issue. [...] It's not compression, but it has the same effect as compression - redundant data is not resent. All fine and dandy.
> Once again, this is all based on the weak assumption that miners are cooperative [...]
> But one of the central pillars of smallblockism is that larger blocks will result in increased orphan rates [...]
> I'd like to know how exactly Matt's RN would obviate it. It would mask it, yes, but it's not a magic bullet.

As it is, it's just a step in the right direction, but I'm also saying that it is an idea that can be developed and deployed across the network in general. Yeah, I don't think it's a magic bullet either, but it is certainly an indicator that positive thought exists in Bitcoin, and that solutions to its inherent problems can be found.
|
We must make money worse as a commodity if we wish to make it better as a medium of exchange
|
|
|
sAt0sHiFanClub
|
|
September 12, 2015, 09:40:38 PM |
|
> I'm having trouble following this. [...] Doesn't the entire block need to be sent somewhere (at least from a miner to one of the nodes in the relay network)?
> A block is (header + n×tx). The vast majority of the transactions should already be in the node's mempool [...] You get a series of 10-byte ids instead of ~250-byte transactions.
> OK, assuming the miner only sends a condensed version of the block with pointers to the relay network, the relay network still has to broadcast the full block to other nodes, correct?

Correct - as long as the relay backbone remains a separate network, then yes (though not over the relay network itself - the blocks get propagated over the vanilla p2p network, as far as I know). But I can imagine a case where it could be extended to a wider network.
|
We must make money worse as a commodity if we wish to make it better as a medium of exchange
|
|
|
jonald_fyookball
Legendary
Offline
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
|
|
September 12, 2015, 09:45:47 PM |
|
> I'm having trouble following this. [...] Doesn't the entire block need to be sent somewhere (at least from a miner to one of the nodes in the relay network)?
> A block is (header + n×tx). [...] You get a series of 10-byte ids instead of ~250-byte transactions.
> OK, assuming the miner only sends a condensed version of the block with pointers to the relay network, the relay network still has to broadcast the full block to other nodes, correct?
> Correct - as long as the relay backbone remains a separate network, then yes (though not over the relay network itself - the blocks get propagated over the vanilla p2p network, as far as I know). But I can imagine a case where it could be extended to a wider network.

Well then, as much as I don't like to agree with the small blockers, their argument is correct that orphan rates will increase, since the full block needs to be broadcast at some point. Although, as I pointed out to Adam, blocks would need to be much bigger (60MB) before this is a problem with current Internet speeds.
|
|
|
|
RoadTrain
Legendary
Offline
Activity: 1386
Merit: 1009
|
|
September 12, 2015, 09:46:41 PM |
|
> I don't see why you have to redefine what bitcoin is to increase transaction throughput.
> That's quite a straw man here, I didn't say that, please don't overgeneralize.
> Let's not split hairs. You said 1GB blocks require a redefinition of bitcoin. Larger blocks have more txs. Blocks are fixed in time. More txs / constant time = higher tx rate.

That's exactly an informal fallacy. Larger blocks mean more txs, BUT more txs don't necessarily mean larger blocks. You are equating them. I said that 1GB blocks require a redefinition of bitcoin; I didn't say you have to redefine what bitcoin is to increase transaction throughput. You have weakened/replaced my argument to make it easier to refute -- a straw man.
|
|
|
|
sAt0sHiFanClub
|
|
September 12, 2015, 10:06:26 PM |
|
> That's exactly an informal fallacy. Larger blocks mean more txs, BUT more txs don't necessarily mean larger blocks. You are equating them.

Whaaat? Bitcoin is engineered to generate a block once every ~10 minutes. That is set in stone. So of course more transactions mean larger blocks - unless you are shrinking the transaction size. What you said makes no logical sense.

> I said that 1GB blocks require a redefinition of bitcoin; I didn't say you have to redefine what bitcoin is to increase transaction throughput. You have weakened/replaced my argument to make it easier to refute -- a straw man.

The tx throughput can vary; the rate of block creation is fixed. We can have as many transactions as users generate, but we still get the same number of blocks.

Edit: Maybe we are getting hung up on the 1GB thing. The same holds true for 2MB, 4MB ... 32MB blocks. Above 32MB you need to change how bitcoin sends messages, but that's academic to this discussion.
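The arithmetic behind this is straightforward: with the ~10-minute interval fixed, on-chain throughput scales only with block size (or with smaller transactions). A rough sketch, assuming an average transaction of ~250 bytes as used elsewhere in the thread:

```python
# on-chain throughput at a fixed block interval
BLOCK_INTERVAL = 600   # seconds, held at ~10 min by difficulty retargeting
AVG_TX = 250           # bytes, assumed average transaction size

def tx_per_second(block_size_bytes):
    """Transactions per second a given block size can carry."""
    return block_size_bytes / AVG_TX / BLOCK_INTERVAL

# how the ceiling moves with the sizes mentioned in the thread
for mb in (1, 2, 4, 8, 32):
    print(f"{mb} MB blocks -> {tx_per_second(mb * 1_000_000):.1f} tx/s")
```

Under these assumptions a 1 MB block caps out at roughly 7 tx/s, and each doubling of the block size doubles the ceiling, since the interval cannot change.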
|
We must make money worse as a commodity if we wish to make it better as a medium of exchange
|
|
|
RoadTrain
Legendary
Offline
Activity: 1386
Merit: 1009
|
|
September 12, 2015, 10:29:56 PM |
|
@sAt0sHiFanClub, I don't know your definition of a transaction, but if we take the Lightning Network as an example, it increases transaction throughput without necessarily increasing the blocksize limit. Large blocks are not the only way of increasing throughput.
|
|
|
|
Quantus
Legendary
Offline
Activity: 883
Merit: 1005
|
|
September 12, 2015, 10:51:26 PM |
|
> I'm having trouble following this. [...] Doesn't the entire block need to be sent somewhere (at least from a miner to one of the nodes in the relay network)?

> A block is (header + n×tx). The vast majority of the transactions should already be in the node's mempool, so why do they have to be sent a second time? [...] You get a series of 10-byte ids instead of ~250-byte transactions.

This really helped me understand your argument. It would be great if this were implemented, but it still would not address the issues of blockchain storage or the threat of spam.
|
(I am a 1MB block supporter who thinks all users should be using Full-Node clients) Avoid the XT shills, they only want to destroy bitcoin, their hubris and greed will destroy us. Know your adversary https://www.youtube.com/watch?v=BKorP55Aqvg
|
|
|
jonald_fyookball
Legendary
Offline
Activity: 1302
Merit: 1008
Core dev leaves me neg feedback #abuse #political
|
|
September 12, 2015, 10:54:33 PM |
|
> @sAt0sHiFanClub, I don't know your definition of a transaction, but if we take the Lightning Network as an example, it increases transaction throughput without necessarily increasing the blocksize limit. Large blocks are not the only way of increasing throughput.

As far as I'm aware, the LN is off the main chain, so it's irrelevant to actually scaling the main chain.
|
|
|
|
sAt0sHiFanClub
|
|
September 12, 2015, 11:00:33 PM |
|
> @sAt0sHiFanClub, I don't know your definition of a transaction, but if we take the Lightning Network as an example, it increases transaction throughput without necessarily increasing the blocksize limit. Large blocks are not the only way of increasing throughput.

You're going off on a tangent now. LN is paperware at the moment. Let's stick to bitcoin for now, eh?
|
We must make money worse as a commodity if we wish to make it better as a medium of exchange
|
|
|
RoadTrain
Legendary
Offline
Activity: 1386
Merit: 1009
|
|
September 12, 2015, 11:07:22 PM |
|
> @sAt0sHiFanClub, I don't know your definition of a transaction, but if we take the Lightning Network as an example, it increases transaction throughput without necessarily increasing the blocksize limit. Large blocks are not the only way of increasing throughput.
> You're going off on a tangent now. LN is paperware at the moment. Let's stick to bitcoin for now, eh?

If your definition of a transaction only includes on-chain transactions, then your statement is correct. My definition also includes trustless off-chain transactions (one way of scaling Bitcoin), and under that definition mine is correct. Oh, we really do have to be precise...
|
|
|
|
|