1  Bitcoin / Development & Technical Discussion / Re: Improved Measurement of Proof-of-Work using entropy on: March 14, 2024, 03:19:37 PM
Use of the apparent difficulty has been proposed many times before-- it and other schemes that result in a unique value for each block which is calculable by the block finder immediately lead to a withholding attack: if you know your block is unusually good, such that it is likely to win even if announced late, you are better off not announcing it. The withholding profits are proportional to your share of hash power (as that's how often your advantage gets you a second block), so it's also a centralization pressure.

Bitcoin's criterion has always been that each node adopts the first block it sees which meets the target, which creates an incentive to announce (one which breaks down if your share of the hashpower is over a large threshold, but if any party controls that much hashpower the system has other, worse problems).


First off, I really appreciate you taking the time to reply. I am aware of the previous proposals to use lexicographical mechanisms to create a preference in block choice. However, the proposal here is different in that it rests on two key new realizations that, although subtle, are what make it powerful. I know the paper is very dense, so I will try to give a more intuitive idea of what is going on.

1) Although each hash is an independent event, each block is a dependent event because it references the hash of the parent on which it is built.
2) The weighting of each block should be based on the expected number of hashes that are needed to reproduce the chain of blocks specified by the tip block.

These two ideas, taken together, allow us to propose a new weighting mechanism, entropy reduction, which eliminates the issues around withholding attacks.

When a block is produced, the number of bits (~ leading binary zeros, i.e. log2 of the hash) by which it exceeds the block threshold follows an exponential distribution with a mean (lambda) of 1. This means a miner expects, on average, to get 1 more bit than the threshold, but they could get lucky and find more than 1 extra bit, or unlucky and find less. The interesting thing is that even if they get lucky and find, say, 3 extra bits, there is still a 12.5% chance that they will lose if a competing block is produced. The likelihood of losing never goes to zero.
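Here is a quick toy simulation of that claim (my own sketch, not code from the paper; the 40-bit threshold is just an example value):

Code:
import random

# Toy check of the "extra bits" behaviour described above.
# Assume a power-of-two target with TARGET_BITS leading zero bits in a
# 256-bit hash space; any hash that meets the target is uniform below it.
TARGET_BITS = 40
HASH_BITS = 256
TRIALS = 200_000

def extra_bits_of_winning_hash():
    # A winning hash is uniform on [1, 2**(HASH_BITS - TARGET_BITS)).
    h = random.randrange(1, 2 ** (HASH_BITS - TARGET_BITS))
    leading_zeros = HASH_BITS - h.bit_length()
    return leading_zeros - TARGET_BITS      # whole bits beyond the threshold

samples = [extra_bits_of_winning_hash() for _ in range(TRIALS)]
print("mean extra bits    ~", sum(samples) / TRIALS)                  # ~1
print("P(>= 3 extra bits) ~", sum(s >= 3 for s in samples) / TRIALS)  # ~0.125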

In addition, the entropy weighting adds bits, which is the same as multiplying in the linear field. Specifically, Bitcoin computes Total Difficulty(n) = diff(n) + Total Difficulty(n-1), whereas PoEM computes Total Difficulty(n) = diff(n) * Total Difficulty(n-1). This means that even if a miner were to get super "lucky" and find 20 extra bits (roughly 1 in 1 million blocks), the extra entropy would only count for about 1/3rd of a normal block, and they would still have a 0.0000001% chance of losing if they withheld the block. So no matter how lucky a miner gets, they always maximize revenue by broadcasting the block as quickly as possible. This actually reduces the withholding attack that exists within the current difficulty weighting system.

If we took the same example but used the current method of adding difficulties to determine block weight, the withholding period for finding 20 extra bits would be 1 million blocks. So compare 1 million to 1. This example elucidates that this is a very different proposal from simply using apparent difficulty. Rather, this proposal does two things. First, it recognizes that apparent difficulty is beneficial because it creates a rule which prefers heavier chains and adds randomness to winning a block, which discourages withholding. Second, it uses the proper combinatorics, treating blocks as a series of dependent events, to weight them, which in turn solves the issues with using apparent difficulty.
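To put rough numbers on the 20-extra-bit example (the 60-bit threshold below is an assumed illustration, not a real network parameter):

Code:
# Illustrative comparison of the two weighting rules for one lucky block.
THRESHOLD_BITS = 60      # assumed example threshold, in leading zero bits
EXTRA_BITS = 20          # the "super lucky" block from the example above

# Additive (apparent difficulty) rule: work scales like 2**bits, so the
# lucky block looks like ~2**20 ordinary blocks of work.
apparent_work_ratio = 2 ** EXTRA_BITS

# Entropy rule: weights multiply, i.e. bits add, so the lucky block only
# carries EXTRA_BITS / THRESHOLD_BITS of one extra normal block.
entropy_extra_fraction = EXTRA_BITS / THRESHOLD_BITS

print(f"additive rule: worth ~{apparent_work_ratio:,} normal blocks")           # ~1,048,576
print(f"entropy rule : worth ~1 + {entropy_extra_fraction:.2f} normal blocks")  # ~1.33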

I hope that provides some intuition. In the paper we have shown that this is indeed the case, both empirically and experimentally. If you look at the graph on page 12, we show that for any g (block production rate per network delay) the confirmation delay is smaller using this method than with the current Bitcoin weighting.



We also explored the concept of concentration, i.e. looking at how the outcome distribution affects the statistical convergence. Bitcoin's would be a binomial distribution and PoEM's would be a biased gamma.
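As a rough illustration of the distributional point (my own toy simulation, with assumed parameters, not a figure from the paper): under the additive rule a chain of N blocks has a fixed recorded weight, while under the entropy rule the weight is that fixed part plus a sum of independent extra-bit terms, which is gamma-like:

Code:
import random

# Toy sketch of the chain-weight distribution over N blocks (assumed values).
THRESHOLD_BITS = 60
N = 100
TRIALS = 20_000

def entropy_chain_weight_bits():
    total = 0
    for _ in range(N):
        extra = 0
        while random.random() < 0.5:     # each further leading zero has prob 1/2
            extra += 1
        total += THRESHOLD_BITS + extra
    return total

weights = [entropy_chain_weight_bits() for _ in range(TRIALS)]
print("additive rule: exactly", N * THRESHOLD_BITS, "bits every time")
print("entropy rule : mean ~", sum(weights) / TRIALS, "bits, spread from the extra-bit sum")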

 

PS: Another way to think about this is that there is infinite variance in a single sample, so reintroducing randomness into the weighting creates a better outcome, because a miner with <50% of the hashrate is less likely to get sequentially lucky and produce a sequence of N blocks that is longer than the honest majority's.

2  Bitcoin / Development & Technical Discussion / Re: Improved Measurement of Proof-of-Work using entropy on: March 02, 2024, 04:46:46 AM
There is now a formal security proof showing the safety and liveness of Proof-of-Entropy-Minima (PoEM): https://eprint.iacr.org/2024/200

This relatively simple adjustment to the measurement of block weight has been empirically shown to prevent selfish mining, create faster finalization times, increase throughput, and decrease block times while maintaining the safety and liveness of the chain. The modifications to the current heaviest-chain rule are very straightforward.

The TL;DR is:

Presently the heaviest-chain rule does not treat blocks as dependent events, even though they are dependent because of the hash link to the parent.

PoEM uses combinatorics to weigh the dependent block events appropriately. This generates a guaranteed unique weight for each block, which is the expected number of hashes that would be required to produce a chain of equal weight.

The guaranteed unique weight prevents the network from mining on competing forks, because the likelihood of two blocks having equal weight is (1/2^256) ~= 0.

Additionally, the process naturally introduces randomization into each sample (i.e., each block), which prevents profitable withholding attacks.

Finally, it shows that the threshold for a Sybil attack can be improved from 33% to 40%.
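A minimal sketch of what the resulting fork-choice comparison could look like, under my reading of the TL;DR above (a block's intrinsic weight is the expected number of hashes needed to produce a hash at least as small, and chain weights multiply, so log-weights add); the function names are my own, not the paper's:

Code:
import math

HASH_SPACE = 2 ** 256

def intrinsic_log2_weight(block_hash: int) -> float:
    # Expected hashes to find a value <= block_hash is roughly HASH_SPACE / block_hash;
    # work in log2 (bits of entropy removed) to keep the numbers manageable.
    return math.log2(HASH_SPACE) - math.log2(block_hash)

def chain_log2_weight(block_hashes: list) -> float:
    # Multiplying per-block weights is the same as adding their log2 weights.
    return sum(intrinsic_log2_weight(h) for h in block_hashes)

def heavier_tip(chain_a: list, chain_b: list) -> str:
    # Exact ties would require equal hashes, which happens with probability ~2^-256.
    return "A" if chain_log2_weight(chain_a) > chain_log2_weight(chain_b) else "B"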
3  Bitcoin / Development & Technical Discussion / Re: Improved Measurement of Proof-of-Work using entropy on: March 11, 2023, 08:27:09 PM
Hold on.

The first lesson in Probability Theory class is the following.

Whenever you approach a problem, the first thing you should do is define a probability space. If the probability space is not defined, then none of these concepts make sense. There is no "probability", there is no "expectation" or "divergence". There is also no "entropy" or "Shannon entropy".

The probability space has not been defined in the paper. If the choice of the probability space is obvious, then you can easily fill this gap and define probability space instead of the author.



They do define the probability space as 2^l, which in the case of Bitcoin is 2^256.
4  Bitcoin / Development & Technical Discussion / Re: Improved Measurement of Proof-of-Work using entropy on: March 11, 2023, 08:15:25 PM
This paper, POEM: Proof-of-Entropy-Minima (https://arxiv.org/abs/2303.04305), was just published to arXiv. It seems like a much better way to measure the heaviest chain tip as well as to minimize the time to resolve orphans. It can instantaneously resolve 67% of orphans rather than having to wait for the next block. Additionally, it seems to have better finalization-time guarantees for a given hash. Also, it has an equation relating to finalization that would create an objective, measurable preference between different hash functions.

No, it's not. The author has made a very common mistake. This mistake is a consequence of a superficial understanding of probability theory.

Satoshi did a good job, and his calculation of the chain weight is correct.

If you disagree with me we can dive into details.

Please tell me what you think the mistake in probability theory is.

Thanks in advance.


There is no "mistake in probability theory". Don't distort my message, please.

What is your personal opinion about this paper? Do you think it is correct?

I think the paper is correct.

In your opinion, what definition of "entropy" does the author use throughout the paper?

delta_S = 1/2^n, where n is the number of leading zeros. This makes sense from a Shannon-entropy standpoint, where entropy is a reduction in divergence, and it also makes sense from a system-entropy standpoint, where miners exert work to lower the entropy and increase the system's order.
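In code, the way I read that definition (illustrative only; n is taken as the count of leading zero bits of the block hash in a 256-bit space):

Code:
def delta_S(block_hash: int, hash_bits: int = 256) -> float:
    # n = number of leading zero bits of the hash.
    n = hash_bits - block_hash.bit_length()
    # delta_S = 1 / 2^n: a smaller hash means more leading zeros, so delta_S
    # shrinks toward zero, i.e. less remaining uncertainty about the chain.
    return 1.0 / (2 ** n)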
5  Bitcoin / Development & Technical Discussion / Re: Improved Measurement of Proof-of-Work using entropy on: March 11, 2023, 07:15:50 PM
Quote
Current difficulty based weighting systems do not take the intrinsic block weight into account.
And it is good that they don't, because this change would lower the requirements for chain reorganization. If I understand it correctly, this proposal is about using chain work based on the current block, instead of the difficulty. Those kinds of changes were also discussed in other topics.

If you look at the paper, it is not just proposing that you change the chain work, but that you simultaneously change the calculation of tip weight from (very roughly) Td_new = Td_old + Td_threshold to Td_new = Td_old * Td_chainwork. That would make both the chain work and the chain weight geometric, which would actually improve finalization time by minimizing conflicts and maximizing recorded work.
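In pseudocode form, the two tip-weight update rules being contrasted (names follow the rough formulas above; Td_chainwork stands for the work implied by the block's actual hash):

Code:
# Current (arithmetic) rule: each block adds its threshold difficulty.
def tip_weight_current(td_old: int, td_threshold: int) -> int:
    return td_old + td_threshold

# Proposed (geometric) rule: each block multiplies in its apparent chain work,
# which in log space is again additive -- hence "adding bits of entropy".
def tip_weight_geometric(td_old: int, td_chainwork: int) -> int:
    return td_old * td_chainwork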
6  Bitcoin / Development & Technical Discussion / Re: Improved Measurement of Proof-of-Work using entropy on: March 11, 2023, 06:36:03 PM
This paper, POEM: Proof-of-Entropy-Minima (https://arxiv.org/abs/2303.04305), was just published to arXiv. It seems like a much better way to measure the heaviest chain tip as well as to minimize the time to resolve orphans. It can instantaneously resolve 67% of orphans rather than having to wait for the next block. Additionally, it seems to have better finalization-time guarantees for a given hash. Also, it has an equation relating to finalization that would create an objective, measurable preference between different hash functions.

No, it's not. The author has made a very common mistake. This mistake is a consequence of a superficial understanding of probability theory.

Satoshi did a good job, and his calculation of the chain weight is correct.

If you disagree with me we can dive into details.

Please tell me what you think the mistake in probability theory is.

Thanks in advance.


There is no "mistake in probability theory". Don't distort my message, please.

What is your personal opinion about this paper? Do you think it is correct?

I think the paper is correct.
7  Bitcoin / Development & Technical Discussion / Re: Improved Measurement of Proof-of-Work using entropy on: March 11, 2023, 06:10:48 PM
This paper, POEM: Proof-of-Entropy-Minima (https://arxiv.org/abs/2303.04305), was just published to arXiv. It seems like a much better way to measure the heaviest chain tip as well as to minimize the time to resolve orphans. It can instantaneously resolve 67% of orphans rather than having to wait for the next block. Additionally, it seems to have better finalization-time guarantees for a given hash. Also, it has an equation relating to finalization that would create an objective, measurable preference between different hash functions.

No, it's not. The author has made a very common mistake. This mistake is a consequence of a superficial understanding of probability theory.

Satoshi did a good job, and his calculation of the chain weight is correct.

If you disagree with me we can dive into details.

Please tell me what you think the mistake in probability theory is.

Thanks in advance.
8  Bitcoin / Development & Technical Discussion / Improved Measurement of Proof-of-Work using entropy on: March 10, 2023, 11:08:28 PM
This paper, POEM: Proof-of-Entropy-Minima (https://arxiv.org/abs/2303.04305), was just published to arXiv. It seems like a much better way to measure the heaviest chain tip as well as to minimize the time to resolve orphans. It can instantaneously resolve 67% of orphans rather than having to wait for the next block. Additionally, it seems to have better finalization-time guarantees for a given hash. Also, it has an equation relating to finalization that would create an objective, measurable preference between different hash functions.
9  Bitcoin / Development & Technical Discussion / Re: BlockReduce: Scaling Blockchain to human commerce on: December 27, 2019, 11:33:03 PM

Then I would debate that the statement, "allowing it to scale to handle tens of thousands of transactions per second without impacting fault tolerance or decentralization", is untrue.


The actual effect on fault tolerance and decentralization has to be put into the context that currently over 50% of Bitcoin's hashpower comes from only 4 pools. As BlockReduce scales, the requirements for running a node which does partial state validation would be much lower than if Bitcoin scaled in its current form. That would mean that although there may be fewer people validating the full state, there would be more people, and fewer pools, validating partial state. I would argue that having partially validating mining nodes is advantageous over having a de minimis number of pools. Having smaller economic entities decide the fate of the protocol, rather than a few large pools, would be positive for the ecosystem.


So to REALLY scale out the network means more partially validating nodes, but fewer fully validating nodes?

That goes the opposite path of what you said below. Or might I have misunderstood?

https://bitcointalk.org/index.php?topic=5060909.msg53240986#msg53240986


Yes, scaling the network means adding more network participants, and that is what this accomplishes.


More participants partially validating, who won't be part of the whole network, and fewer participants fully validating is centralizing; it makes the network smaller. It is anti-scaling.

I think you should consider the meaning of centralization more holistically. If I can't go to 7-11 and buy a Coke with Bitcoin, it is not fully decentralized. If I need third parties involved in a transaction, it is not fully decentralized. If I need to use centralized exchanges to trade with good liquidity, it is not fully decentralized. If it costs $200 to make a transaction, it is pricing out network participants and small transactions, which is not fully decentralized.

The number of people that use Bitcoin, not just the number of people running nodes, is critical in answering the question of whether it is decentralized. Additionally, to have the largest network with the most participants (most decentralized...?), I would argue that Bitcoin needs to scale on-chain.


Growing node requirements/costs would only make the node count go down, not up. BlockReduce might increase transaction throughput, but it's centralizing.


If there are benefits such as a greater number of users and increased utility at a lower cost, the marginal degree of centralization (fewer fully validating nodes) may very well be worth it. However, I would contend that with a larger user base, even if the cost of running a fully validating node increases, the absolute number of full nodes would likely go up, not down, even if the relative number shrinks.
10  Bitcoin / Development & Technical Discussion / Re: BlockReduce: Scaling Blockchain to human commerce on: December 10, 2019, 12:31:42 AM

Then I would debate that the statement, "allowing it to scale to handle tens of thousands of transactions per second without impacting fault tolerance or decentralization", is untrue.


The actual effect on fault tolerance and decentralization has to be put into the context that currently over 50% of Bitcoin's hashpower comes from only 4 pools. As BlockReduce scales, the requirements for running a node which does partial state validation would be much lower than if Bitcoin scaled in its current form. That would mean that although there may be fewer people validating the full state, there would be more people, and fewer pools, validating partial state. I would argue that having partially validating mining nodes is advantageous over having a de minimis number of pools. Having smaller economic entities decide the fate of the protocol, rather than a few large pools, would be positive for the ecosystem.


So to REALLY scale out the network means more partially validating nodes, but fewer fully validating nodes?

That goes the opposite path of what you said below. Or might I have misunderstood?

https://bitcointalk.org/index.php?topic=5060909.msg53240986#msg53240986


Yes, scaling the network means adding more network participants, and that is what this accomplishes. The requirement that all market participants be fully validating nodes is a flaw, not a virtue. BlockReduce allows a larger number of incrementally more expensive ways of participating in the network while also scaling. I think this is better than an all-or-nothing approach. Additionally, when counting market participants you should consider Bitcoin users, in addition to nodes and miners, as a metric of success.



11  Bitcoin / Development & Technical Discussion / Re: Blockrope - Internal Parrallel Intertwined Blockchains - scaleability related on: December 10, 2019, 12:14:24 AM
A more thought out version of this is BlockReduce.

https://bitcointalk.org/index.php?topic=5060909.0

Overview paper:

https://arxiv.org/pdf/1811.00125.pdf
12  Bitcoin / Development & Technical Discussion / Re: BlockReduce: Scaling Blockchain to human commerce on: December 07, 2019, 12:07:37 AM

Then I would debate that the statement, "allowing it to scale to handle tens of thousands of transactions per second without impacting fault tolerance or decentralization", is untrue.


The actual effect on fault tolerance and decentralization has to be put into the context that currently over 50% of Bitcoin's hashpower comes from only 4 pools. As BlockReduce scales, the requirements for running a node which does partial state validation would be much lower than if Bitcoin scaled in its current form. That would mean that although there may be fewer people validating the full state, there would be more people, and fewer pools, validating partial state. I would argue that having partially validating mining nodes is advantageous over having a de minimis number of pools. Having smaller economic entities decide the fate of the protocol, rather than a few large pools, would be positive for the ecosystem.


You don't believe that that will centralize Bitcoin toward the miners? Or you don't believe that users/economic majority should have the ability to run their own full nodes?


I think that people oftentimes fall into tired narratives about the majority of users, fairness, et cetera without fully considering what any of it really means, or why it might be good or bad. I would argue that if Bitcoin is meant to be censorship resistant and decentralized, it must allow the greatest number of people to use it with the fewest intermediaries possible. Making low-resource validation the primary focus of decentralization misses the point. If even 20% of a population self-custodied Bitcoin which they regularly used for transactions, it would be effectively impossible to censor or outlaw. When we discuss decentralization, the power of a network that scales should also be a consideration, not just how easily it is validated.
13  Bitcoin / Development & Technical Discussion / Re: BlockReduce: Scaling Blockchain to human commerce on: December 06, 2019, 11:49:25 PM
Hi there! I was looking through some old research papers about merge-mining and came upon this thread. I'm very interested in your proposal as it seems like a great way to shard state without losing security via merge mining! I have a question for you though: If miners have to verify all the state that passes up to Prime, they have to run a full node so that they have the state of all the blockchains to properly verify everything. They are incentivized to do this so that they don't mine invalid blocks, but in doing so they might put a strain on the network because their zone and region nodes are not necessarily in the same geographic region as the rest of the zone and region nodes. (Of course, the zone that the miner is located in will be optimal, but I am talking about the rest of the zones and regions necessary for running a full state node).
For example, for n zones, n/2 regions, and m miners running a full state node, we have m - 1 latent nodes in each zone (or n*(m-1) latent zone nodes total) and m - 1 latent nodes in each region (or (n/2)*(m-1) latent region nodes total). Do you consider this an issue for network latency? Is there perhaps some way or incentive for a miner to run a full node and also run each sub-node (zone and region) in the proper geographic location? This might be physically difficult without the use of some cloud provider like AWS.

Looking forward to hearing more! Thanks.

Coopex, great question! Sorry, it has taken me a bit to get back to you. The miner is incentivized to hold zone state which they are not mining because it reduces the risk that they will include a zone block in a region block which eventually gets rolled back in the zone. If they were to wait or delay including zone blocks in region blocks, they could also achieve greater certainty, but they would get lower rewards. Running the alternate zone state allows them to gain greater certainty about a zone block faster. Doing so with a node which is appropriately placed in the network topology will decrease that node's latency and further decrease risk. Therefore, miners will be incentivized to keep state, and to do so in a network-optimal way. I would absolutely expect that a person running full state would do so using something like AWS to allow optimization of the geographic placement of nodes.

Thanks for your response! I see now that miners are incentivized to run all of their nodes in the least latent way possible. However, miners might not physically be able to do so without moving their mining operation outside of the zone that they operate in, unless they want to pay a cloud provider to host it for them - which may not work if the cloud provider does not offer server hosting close enough or with proper precision to the geographic location of the zone. Perhaps a business could evolve to host servers in close proximity to every zone and move them around when necessary, kind of like high frequency trading does with the stock market, but even then you'd have the business be a centralizing factor.

In any case, my question is more general. Do you consider it an issue if some of the nodes in a zone are more latent than others? Are there bandwidth concerns with users or miners who run latent nodes? What if I just have a really shitty internet connection - could I be causing bandwidth issues for the network, or am I just causing issues for myself?

Thank you for your responses!

That is a pretty insightful question, but the answer is pretty simple. All distributed networks, from Napster to Bitcoin, have a seed-and-leech problem. In the case of Napster it is driven by storage space more than bandwidth; in the case of Bitcoin it is driven by bandwidth more than storage. Therefore, if you are a Bitcoin node with low bandwidth you are slowing the overall network down (a leech), whereas if you have high bandwidth you are speeding it up (a seed). BlockReduce is no different from Bitcoin in this regard. However, BlockReduce rewards mining in the zone chains, creating an economic incentive for mining participants to optimize for latency.
14  Bitcoin / Development & Technical Discussion / Re: BlockReduce: Scaling Blockchain to human commerce on: December 02, 2019, 02:47:48 PM
Hi there! I was looking through some old research papers about merge-mining and came upon this thread. I'm very interested in your proposal as it seems like a great way to shard state without losing security via merge mining! I have a question for you though: If miners have to verify all the state that passes up to Prime, they have to run a full node so that they have the state of all the blockchains to properly verify everything. They are incentivized to do this so that they don't mine invalid blocks, but in doing so they might put a strain on the network because their zone and region nodes are not necessarily in the same geographic region as the rest of the zone and region nodes. (Of course, the zone that the miner is located in will be optimal, but I am talking about the rest of the zones and regions necessary for running a full state node).
For example, for n zones, n/2 regions, and m miners running a full state node, we have m - 1 latent nodes in each zone (or n*(m-1) latent zone nodes total) and m - 1 latent nodes in each region (or (n/2)*(m-1) latent region nodes total). Do you consider this an issue for network latency? Is there perhaps some way or incentive for a miner to run a full node and also run each sub-node (zone and region) in the proper geographic location? This might be physically difficult without the use of some cloud provider like AWS.

Looking forward to hearing more! Thanks.

Coopex, great question! Sorry, it has taken me a bit to get back to you. The miner is incentivized to hold zone state which they are not mining because it reduces the risk that they will include a zone block in a region block which eventually gets rolled back in the zone. If they were to wait or delay including zone blocks in region blocks, they could also achieve greater certainty, but they would get lower rewards. Running the alternate zone state allows them to gain greater certainty about a zone block faster. Doing so with a node which is appropriately placed in the network topology will decrease that node's latency and further decrease risk. Therefore, miners will be incentivized to keep state, and to do so in a network-optimal way. I would absolutely expect that a person running full state would do so using something like AWS to allow optimization of the geographic placement of nodes.
15  Bitcoin / Development & Technical Discussion / Re: BlockReduce: Scaling Blockchain to human commerce on: December 02, 2019, 02:37:13 PM

The proposal basically deals with the bandwidth problem of on-chain scaling, trading it off against trust that miners are doing the proper cross-shard checks that they're supposed to do and are incentivized to do. What it fails to do is make the whole chain fully verifiable by a typical desktop computer, as should be apparent from "the total chain will require around 8 Tb/year of storage".


Tromp, I appreciate the time you have taken to look at BlockReduce. One thing that I would debate is the use of the word sharding. Although a miner can depend upon a zone block's work as an attestation to the correctness of the included transactions, they are not required to, much like an SPV node doesn't have to keep the entire chain state but rather just looks at a block header. This is not sharding per se, but rather a mode of operation that a node can work within to use fewer resources. I would anticipate that serious miners or pools will run and validate full state because they have an economic incentive to do so, while merchants will likely run partial state, much like SPV.

Another way to think about BlockReduce is as a form of multi-level Erlay where "sketches" are sent when a zone or region block is found, rather than after an arbitrary delay. The obvious difference is that actual sub-blocks are found and rewarded, incentivizing miners to self-organize in a network-optimal way.
16  Economy / Games and rounds / Re: Find 2 ETH hidden in plain sight on: May 07, 2019, 04:24:14 PM
Round 2

The current hints for the 2 ETH are:

May 6th: The CAs
May 7th: I think you should go home now Devin

The previous hints can be disregarded, as the 1 ETH from Round 1 has already been found.
17  Economy / Games and rounds / Re: Find 1 ETH hidden in plain sight on: May 06, 2019, 06:34:27 PM
Here is the solution to last week's puzzle for 1 ETH. This corresponds to this address, which held 1 ETH.

Check out our new puzzle for a chance to win 2 ETH!

18  Economy / Games and rounds / Re: Find 1 ETH hidden in plain sight on: May 03, 2019, 01:54:12 PM
The current hints for finding the 1 ETH are:

May 2nd: ECC
May 3rd: P=Q

Retweet today's hint for a chance to get tomorrow's hint today!
19  Economy / Games and rounds / Find 2 ETH hidden in plain sight on: May 02, 2019, 06:52:26 PM
GridPlus 2 ETH Treasure Hunt





In preparing for the launch of the GridPlus Lattice1, we have set up a little puzzle for everyone. There is 2 ETH hidden in the pattern of the Lattice1 box. Think you can find it? Try your guess at gridplus . If you need a hint, 1 in 10 people that retweet the last hint will get the next hint early. Learn more at GridPlus.io




 
20  Bitcoin / Development & Technical Discussion / Re: [Scaling] Minisketch - Unmoderated on: January 05, 2019, 07:47:12 PM
Firstly, thank you gmaxwell for taking the time to answer my question.  I very much appreciate it.

Transaction announcements to other peers are already delayed for bandwidth reduction (because announcing many at once takes asymptotically about one fourth the bandwidth, since ip+tcp+bitcoin have overheads similar to the size of one announcement and the delays usually prevent the same transaction from being announced both ways across a single link) and for privacy reasons. The delays are currently a Poisson process with an expected delay of 5 seconds between a node transmitting to all inbound connecting peers. Outbound connections each see a 2 second expected delay.

Is the proposal with minisketch to keep the same delays that are currently used? Given the current rate of transactions on the network, what would be the anticipated bandwidth savings with minisketch (assuming the delay remains unchanged)?
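For a sense of scale, here is a back-of-envelope sketch of the batching effect I am asking about; every constant in it (announcement size, per-message overhead, transaction rate) is an assumption for illustration, not a protocol fact or a measurement:

Code:
# Rough, assumption-laden estimate of how batching announcements amortizes
# per-message overhead on one link.  All numbers are placeholders.
TX_RATE = 5.0           # assumed transactions per second seen by the node
BATCH_DELAY = 2.0       # seconds (the expected outbound delay quoted above)
INV_ENTRY_BYTES = 36    # assumed size of one announcement entry
PER_MSG_OVERHEAD = 80   # assumed ip+tcp+bitcoin framing per message

txs_per_batch = TX_RATE * BATCH_DELAY
unbatched_bps = TX_RATE * (INV_ENTRY_BYTES + PER_MSG_OVERHEAD)
batched_bps = (txs_per_batch * INV_ENTRY_BYTES + PER_MSG_OVERHEAD) / BATCH_DELAY

print(f"one message per tx      : ~{unbatched_bps:.0f} B/s per link")
print(f"batched every {BATCH_DELAY:.0f} seconds : ~{batched_bps:.0f} B/s per link")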


Quote
selfish and resource optimization behavior especially against small nodes
All of the computational work in reconciliation is performed by the party that initiates the reconciliation.  So a third party cannot cause you to expend more resources than you want to expend on it.  We've been doing protocol design assuming the need to support rpi3 nodes without an undue burden; which was part of the reason minisketch was required since prior bandwidth efficient reconciliation code was far too slow for limited hardware like that.  rpi3 is more or less the bottom end for running a full node-- already that target takes 20+ days to synchronize the chain.  Slower could still be used, but it would presumably reconcile less often.

With minisketch, wouldn't a node need to compute and transmit a different sketch for each one of its peers?

Quote
The theoretical propagation time of data across the network is actually the upper bound on network throughput.
Not at all. Bitcoin is already a batch system-- blocks show up at random times and commit to a chunk of transactions; whatever is missed and still valid goes into a subsequent block. Both because blocks are relatively infrequent and because the mining process itself has seconds of latency (e.g. from batching new block templates to mining devices to avoid creating extreme loads on block template servers), the existing delays have little effect on the transaction handling delays.

More fundamentally, the connection you believe exists between tx propagation time and network throughput just doesn't exist:  It could take an hour to propagate transactions and the resulting network throughput would be unchanged because the network doesn't stop and wait while the transactions are being propagated. If it did, it would add an hour until you saw a confirmation, but the number of confirmed transactions per hour would not be changed.

Imagine you had an infinite number of dump trucks to use in hauling gravel from one city to another, 24 hours a day, 365 days a year. Each truck carries 1 ton of payload and every 5 minutes a full truck leaves. During a week you will carry 2016 tons of gravel between the cities. It does not matter if the cities are 1 hour apart or 5 hours apart: changing latency does not change throughput for non-serialized operations.

In Bitcoin the latency is relevant-- not because of throughput reasons, but because of things like people caring about their transactions being confirmed sooner rather than later.  So long as the TX propagation delays are very small compared to the block finding interval they don't matter much in terms of the user's experience, so it's fine to trade off some small latency for other considerations like reduced bandwidth or increased privacy-- which is something Bitcoin currently does and has always done since the very first version (though the details of the trade-off have changed over time).

There are other minor considerations that make tx propagation delays matter some, but they're all of a sort where they don't matter much so long as they're significantly less than other delays (like block template updating delays).
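For concreteness, the arithmetic in the quoted truck example works out like this (a trivial sketch):

Code:
# Trucks leave every 5 minutes carrying 1 ton each, no matter how long the trip is.
MINUTES_PER_WEEK = 7 * 24 * 60
DEPARTURE_INTERVAL_MIN = 5
TONS_PER_TRUCK = 1

for trip_hours in (1, 5):
    tons_per_week = MINUTES_PER_WEEK / DEPARTURE_INTERVAL_MIN * TONS_PER_TRUCK
    print(f"{trip_hours}h trip: {tons_per_week:.0f} tons/week (latency does not change throughput)")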

I don't think that this example fully encompasses the problem.  

The transaction propagation time, although inconvenient, is not the limitation on Bitcoin specifically. The problem limiting the throughput of Bitcoin is the average bandwidth available to Bitcoin nodes and how much bandwidth is needed to handle a given number of TPS. The reason that latency becomes important in a multi-hop system is that it acts to reduce the effective bandwidth of the network.

Adding additional propagation delay needs to be compensated for by a decrease in bandwidth. Minisketch clearly accomplishes this. However, there is an optimal point at which the total throughput of the system is maximized for a delay X, given the amount of bandwidth reduction Y that the delay enables.

This is a well-studied trade-off that affects the throughput of Tor, given the latency of nodes as well as the prescribed number of hops. I think that some of this work may be helpful in understanding how to optimize Minisketch.

In the case of Bitcoin, you may actually find that substantially longer delays are the point at which total network throughput is maximized, or rather at which bandwidth is minimized given the current block size limit.
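To illustrate the kind of optimization I mean, here is a toy model; every constant and the latency-penalty term are made-up assumptions, purely to show the shape of the trade-off, not a claim about real network numbers:

Code:
# Toy trade-off: longer relay delay -> bigger announcement batches -> less
# overhead per tx, but (in this naive model) a latency penalty on effective
# bandwidth.  All constants are illustrative placeholders.
TX_BYTES = 36            # assumed bytes per announcement entry
MSG_OVERHEAD = 80        # assumed framing bytes per message
LINK_BANDWIDTH = 50_000  # assumed bytes/s available for relay on a link
TX_RATE = 5.0            # assumed offered transactions per second
LATENCY_PENALTY = 0.02   # assumed fraction of effective bandwidth lost per second of delay

def relay_throughput(delay_s: float) -> float:
    batch = max(TX_RATE * delay_s, 1.0)
    bytes_per_tx = TX_BYTES + MSG_OVERHEAD / batch
    usable = LINK_BANDWIDTH * max(1.0 - LATENCY_PENALTY * delay_s, 0.0)
    return usable / bytes_per_tx

best_delay = max(range(1, 61), key=relay_throughput)
print(f"toy optimum: ~{best_delay}s delay "
      f"({relay_throughput(best_delay):.0f} tx/s vs {relay_throughput(1):.0f} tx/s at 1s)")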

This again gets a little bit tricky, though, because delaying relaying by, say, a minute could substantively increase TPS or decrease bandwidth, but it would cause some form of selfish mining.