Bitcoin Forum
Author Topic: How a floating blocksize limit inevitably leads towards centralization  (Read 71521 times)
MoonShadow
Legendary
*
Offline Offline

Activity: 1708
Merit: 1007



View Profile
February 23, 2013, 01:13:17 AM
 #421

While this number isn't remotely what it would have to be to handle the kind of transactions normally done in cash, we don't really want to handle every single transaction.

Why not? Because it would require 25GB blocks (to do 15,000,000,000 transactions per day) and that seems completely impossible right now?

No, but because of the network effects of raising that limit.  While not a certainty, raising the limit theoretically permits the overall resource consumption of bitcoin's p2p network to increase exponentially.  And it's this increase in resource consumption that would drive out, not just the hobbyist with a full client, but small mining operations that are otherwise profitable but marginal.  I'm not afraid of some degree of this if I'm convinced that raising that limit is a net benefit for Bitcoin, but I'm not convinced that this is the case.  Transaction times, delay times, and average transaction fees are far from the only considerations.  This is a very complex system, and even a small increase in the hard limit could cause some unintended consequences; and the benefit of a small increase does not justify the potential risk of a fight over a hard fork of the code.

"The powers of financial capitalism had another far-reaching aim, nothing less than to create a world system of financial control in private hands able to dominate the political system of each country and the economy of the world as a whole. This system was to be controlled in a feudalist fashion by the central banks of the world acting in concert, by secret agreements arrived at in frequent meetings and conferences. The apex of the systems was to be the Bank for International Settlements in Basel, Switzerland, a private bank owned and controlled by the world's central banks which were themselves private corporations. Each central bank...sought to dominate its government by its ability to control Treasury loans, to manipulate foreign exchanges, to influence the level of economic activity in the country, and to influence cooperative politicians by subsequent economic rewards in the business world."

- Carroll Quigley, CFR member, mentor to Bill Clinton, from 'Tragedy And Hope'
BradZimdack
Member
**
Offline Offline

Activity: 87
Merit: 12


View Profile
February 23, 2013, 02:09:25 AM
 #422


I think you're on the right track with that idea.  Here's a metaphor that might be related to this whole issue:

If demand for wheat increases, the price of wheat increases too because, of course, there's a limited supply of wheat.  The increased price encourages farmers to grow more wheat and make more investment in tractors and equipment.  Higher wheat prices encourage more people to become wheat farmers when they see that the wheat business has become more lucrative.  With the increased supply of wheat that results from its increased production, consumer prices begin to fall.  Overall, the economy ends up with a much larger supply of wheat at the most competitive price possible.  Production is maximized, farmers' profits are maximized, and consumer prices are minimized.

I know the block size issue is more complex due to things like hard drive space, but the relationships between supply, demand, production, and price seem important to the problem at hand.
markm
Legendary
*
Offline Offline

Activity: 2940
Merit: 1090



View Profile WWW
February 23, 2013, 06:31:39 AM
 #423

Fine then, let's look at how much it will cost to do specific upgrade scenarios, since all costs ultimately end up coming out of the pockets of users. (Right? Users ultimately pay for everything?)

One scenario is, we upgrade absolutely every full node that currently runs 24/7.

Already we are driving out all the people who like to freeload on the resources of decentralised networks without giving back. They fire up a node, download the movie they want or do the bitcoin payment they want - in general, get something they want from the system - then they shut down the app so that their machine doesn't participate in providing the service.

So for a totally ballpark, handwaving, back-of-a-napkin guess, let's guess maybe ten times the block size would mean ten times the current costs of hardware and bandwidth. "We" (the users) provide upgrades to all the current 24/7 full nodes, increase the max block size by ten, and we're done.

Of course the beancounters will point out "we" can save a lot of cost by building one global datacentre in Iceland, increasing the max block size to, oh heck, why quibble over how many gigabytes, let's just do this right and say at least a terabyte. Current nodes are just the fossils of a dead era; "we" do this once, do it right, and we're done. For redundancy maybe "we" actually build seven centres, one per continent, instead of just one wherever electricity and cooling are cheapest.


Two details:

One, is the fixed cost really directly linear to the max block size? Or is it really more like exponential?

Two, experiment with various s/we/someoneelse/ substitutions? (Google? The government? Big Brother? Joe Monopolist? The Russian Mob? Etc.)

(I thought I had two details, but come time to type the second couldn't remember what I had had in mind, so made up the above detail Two as a spacefiller.)
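On detail One, a toy cost sketch in the same handwaving spirit (every number here is an invented assumption, not a measurement):

```python
# Toy model for detail One: if each full node's hardware/bandwidth cost is
# proportional to the max block size, per-node cost is linear in block size,
# but the whole network's bill also scales with how many full nodes "we"
# keep running. All figures are invented for illustration.

def per_node_cost(block_mb, cost_per_mb=2.0):
    # assumed cost units per node per month, per MB of max block size
    return block_mb * cost_per_mb

def network_cost(block_mb, n_nodes, cost_per_mb=2.0):
    return n_nodes * per_node_cost(block_mb, cost_per_mb)

print(network_cost(10, 10_000))      # ten-times-bigger blocks, all 24/7 nodes
print(network_cost(1_000_000, 7))    # the "seven datacentres" scenario
```

Whether the per-node constant itself stays constant as blocks grow is exactly the open question in detail One.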

-MarkM-

Browser-launched Crossfire client now online (select CrossCiv server for Galactic  Milieu)
Free website hosting with PHP, MySQL etc: http://hosting.knotwork.com/
MoonShadow
Legendary
*
Offline Offline

Activity: 1708
Merit: 1007



View Profile
February 23, 2013, 07:08:15 AM
 #424



One, is the fixed cost really directly linear to the max block size? Or is it really more like exponential?


It's definitely exponential, even if you just consider network traffic.

Let me paint a strawman to burn...

The network topology, as it exists, is largely random.  A full client is expected to have at least 8 active connections, although three would suffice for the health of the network and only one works for the node itself.  Many can, and do, have more connections than 8.  Many of the 'fallback' nodes have hundreds, perhaps thousands, of connections.  So here is the problem: when a block is found, it's in every honest node's own interest that said block propagates to the edges of the network as quickly as possible.  As recently as last year, it wouldn't normally take more than 10 seconds to saturate the network, but Satoshi chose a 10 minute interval, in part, to reduce the number of network splits and orphaned blocks due to propagation times, because he presumed that the process nodes go through (verify every transaction, verify the block, broadcast a copy to each peer that does not have it) would grow as blocks became larger.  The verification delays grow exponentially with the number of transactions simply because each node must perform its own series of checks before forwarding said block.
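For concreteness, a toy Python model of this store-and-forward relay (the fanout, per-transaction verify time, transaction size, and bandwidth below are all made-up illustrative numbers, not measurements of the real network):

```python
import math

# Toy model: every node fully verifies a block before relaying it, so each
# hop on the path from the miner to the network edge adds the full
# verification delay plus one upload. All parameter values are assumptions.

def edge_delay(n_tx, n_nodes, fanout=8,
               verify_per_tx=0.001, bytes_per_tx=500, bandwidth=1e6):
    """Seconds until a block reaches the edge of a randomly connected
    network; hop count grows with log_fanout(n_nodes)."""
    hops = math.ceil(math.log(n_nodes, fanout))
    verify = n_tx * verify_per_tx                # per-node CPU checks
    transmit = n_tx * bytes_per_tx / bandwidth   # per-hop upload time
    return hops * (verify + transmit)

print(edge_delay(2_000, 10_000))    # roughly a 1 MB block
print(edge_delay(20_000, 10_000))   # ten times the transactions, ~10x delay
```

Under these assumptions the per-hop delay scales linearly with transaction count, while the hop count grows only with the logarithm of the node count.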

Now we get to the traffic problem...

Once the block has been found by a miner, that miner very much wants to send it to every peer he has as quickly as he can.  So he has to upload that block, whatever its size, at least 8 times, because it's impossible that any of his peers already has it.  Then, after each of those peers has performed the checks, each proceeds to upload that block to all of his peers save one (the one he got it from), because at this point it is very unlikely that any of his other connected peers already has the block.  These nodes also have a vested interest in getting this block out quickly, once they have confirmed it's a valid block, if they are miners themselves, because they don't want to end up mining on a minority block.  Thus, as you can see, the largest miners will always form the center of the network, because they all have a vested interest in being very well connected to each other.  They also have a vested interest in peer connections with other miners of their same caliber, but not so much with the rest of the network, since sending to lesser miners or non-mining full clients slows down their ability to transmit said block to those peers that increase the odds that said block will be the winner in a blockchain split.  This effect just gets worse as the size of the block increases, no matter how fast the connections of the mining nodes may be.
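As a back-of-the-envelope sketch of that upload burden (the peer counts and block size here are illustrative assumptions):

```python
# In a flood network every connection carries the block roughly once, so
# network-wide traffic scales with (number of links) x (block size).
# The values below are illustrative, not measurements.

def miner_first_upload(block_mb, peers=8):
    # the node that found the block uploads it once per peer
    return block_mb * peers

def relay_bytes(block_mb, n_nodes, avg_peers=8):
    links = n_nodes * avg_peers / 2   # each link counted once
    return links * block_mb           # MB moved across the whole network

print(miner_first_upload(1))      # 8 MB uploaded by the finder of a 1 MB block
print(relay_bytes(1, 10_000))     # 40000.0 MB network-wide
```

Ten times the block size multiplies both numbers by ten; the finder's personal upload burden is part of what drives the incentive to peer mainly with other big miners.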

Well, this process continues across the p2p network until half of the network already has the block, at which point each node, on average, finds that only half of its peers still need the block.  The problem is that, for small miners, propagating a block that is not their own offers somewhat less advantage than it does for the big miners at the center of the network, because those big miners have already been mining against that new block for many milliseconds to several seconds before the marginal miner even received it.  To make matters worse, the marginal miner tends to have a slower connection than the majors.  So even though it's increasingly likely that his peers don't need the block at all, as the size of the block increases, the incentives for the marginal miner to fail to forward that block, or to pretend he doesn't have it, increase at a rate that is greater than linear.  This is actually true for all of the mining nodes, regardless of their position in the network relative to the center, but those closest to the center are always those with the greatest incentives to propagate, so the disincentives grow at a rate closer to linear the closer one is to the center of the network.

Now there is a catch: most nodes don't know where they are in the network.  It's not that they can't determine this; most owners simply don't bother to figure it out, nor do they deliberately establish peer connections with the centermost nodes.  However, those centermost nodes almost certainly do have favored connections directly to each other, established deliberately by their owners for these very reasons.

"The powers of financial capitalism had another far-reaching aim, nothing less than to create a world system of financial control in private hands able to dominate the political system of each country and the economy of the world as a whole. This system was to be controlled in a feudalist fashion by the central banks of the world acting in concert, by secret agreements arrived at in frequent meetings and conferences. The apex of the systems was to be the Bank for International Settlements in Basel, Switzerland, a private bank owned and controlled by the world's central banks which were themselves private corporations. Each central bank...sought to dominate its government by its ability to control Treasury loans, to manipulate foreign exchanges, to influence the level of economic activity in the country, and to influence cooperative politicians by subsequent economic rewards in the business world."

- Carroll Quigley, CFR member, mentor to Bill Clinton, from 'Tragedy And Hope'
twolifeinexile
Full Member
***
Offline Offline

Activity: 154
Merit: 100



View Profile
February 23, 2013, 08:11:19 AM
 #425

One, is the fixed cost really directly linear to the max block size? Or is it really more like exponential?
It's definitely exponential, even if you just consider network traffic. […]
Is it possible to test this theory in the testnet?
MoonShadow
Legendary
*
Offline Offline

Activity: 1708
Merit: 1007



View Profile
February 23, 2013, 01:34:00 PM
 #426

One, is the fixed cost really directly linear to the max block size? Or is it really more like exponential?
It's definitely exponential, even if you just consider network traffic. […]
Is it possible to test this theory in the testnet?

I would doubt it, since there is no economic dynamic on testnet.

"The powers of financial capitalism had another far-reaching aim, nothing less than to create a world system of financial control in private hands able to dominate the political system of each country and the economy of the world as a whole. This system was to be controlled in a feudalist fashion by the central banks of the world acting in concert, by secret agreements arrived at in frequent meetings and conferences. The apex of the systems was to be the Bank for International Settlements in Basel, Switzerland, a private bank owned and controlled by the world's central banks which were themselves private corporations. Each central bank...sought to dominate its government by its ability to control Treasury loans, to manipulate foreign exchanges, to influence the level of economic activity in the country, and to influence cooperative politicians by subsequent economic rewards in the business world."

- Carroll Quigley, CFR member, mentor to Bill Clinton, from 'Tragedy And Hope'
Serith
Sr. Member
****
Offline Offline

Activity: 269
Merit: 250


View Profile
February 23, 2013, 02:55:21 PM
 #427

The verification delays grow exponentially with the number of transactions simply because each node must perform its own series of checks before forwarding said block.

Wrong, because the number of nodes involved in verifying a specific block grows exponentially as well, so the relationship between the number of transactions and propagation time is linear.

As an analogy, think about how a bacterial colony multiplies: at every step the size of the colony increases by a factor of 2 until it reaches, let's say, 1024. If it suddenly requires twice as much time to multiply, then it takes only twice as long to reach the size of 1024, because the number of multiplication steps is still the same.
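Serith's analogy as arithmetic, using his own example numbers:

```python
import math

# Doubling each step means reaching a target size always takes log2(target)
# steps; stretching the per-step time only stretches the total linearly.

def time_to_reach(target, step_time):
    steps = math.log2(target)   # number of doublings needed
    return steps * step_time

print(time_to_reach(1024, 1))   # 10.0 steps' worth of time
print(time_to_reach(1024, 2))   # 20.0: twice the step time, twice the total
```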
paraipan
In memoriam
Legendary
*
Offline Offline

Activity: 924
Merit: 1004


Firstbits: 1pirata


View Profile WWW
February 23, 2013, 03:04:32 PM
 #428

The verification delays grow exponentially with the number of transactions simply because each node must perform its own series of checks before forwarding said block.

Wrong, because the number of nodes involved in verifying a specific block grows exponentially as well, so the relationship between the number of transactions and propagation time is linear. […]

MoonShadow is right, every node performs the checks on its own before forwarding the block to other peers, please don't make useless analogies Serith.

BTCitcoin: An Idea Worth Saving - Q&A with bitcoins on rugatu.com - Check my rep
MoonShadow
Legendary
*
Offline Offline

Activity: 1708
Merit: 1007



View Profile
February 23, 2013, 03:14:40 PM
 #429

Wrong, because the number of nodes involved in verifying a specific block grows exponentially as well, so the relationship between the number of transactions and propagation time is linear. […]

If block verification were a distributable process, you'd be correct, but it isn't.  At least it's not now, and I don't know how it could be done.  One thing that could be altered to speed up propagation is for some nodes to have trusted peers, wherein if they receive a block from such a peer, they re-broadcast that block first and then do their checks.  But if this were the default behavior, the network could be DDoS'ed with false blocks.

"The powers of financial capitalism had another far-reaching aim, nothing less than to create a world system of financial control in private hands able to dominate the political system of each country and the economy of the world as a whole. This system was to be controlled in a feudalist fashion by the central banks of the world acting in concert, by secret agreements arrived at in frequent meetings and conferences. The apex of the systems was to be the Bank for International Settlements in Basel, Switzerland, a private bank owned and controlled by the world's central banks which were themselves private corporations. Each central bank...sought to dominate its government by its ability to control Treasury loans, to manipulate foreign exchanges, to influence the level of economic activity in the country, and to influence cooperative politicians by subsequent economic rewards in the business world."

- Carroll Quigley, CFR member, mentor to Bill Clinton, from 'Tragedy And Hope'
markm
Legendary
*
Offline Offline

Activity: 2940
Merit: 1090



View Profile WWW
February 23, 2013, 03:20:38 PM
 #430

Still, the number of hops (nodes) a block has to be checked by in order for the network to be flooded / saturated with that block is linear to the average radius of the net, in hops, from the node that solves the block, isn't it?

-MarkM-

MoonShadow
Legendary
*
Offline Offline

Activity: 1708
Merit: 1007



View Profile
February 23, 2013, 03:26:23 PM
 #431

Still, the number of hops (nodes) a block has to be checked by in order for the network to be flooded / saturated with that block is linear to the average radius of the net, in hops, from the node that solves the block, isn't it?

-MarkM-


Hmm, more or less.  But my point is that, as the blocksize increases, the propagation delays increase due to more than one delay metric increasing.  We have the check times, we have the individual p2p transmission times, and we have the number of hops.

One thing that I didn't make clear is that I also suspect, but cannot prove, that the average number of hops in the network will tend to increase as well.  That's partly because of an increase in active nodes, which is an artifact of a growing economy, but also somewhat due to the increase in blocksize.  I suspect that as network resource demand increases, some full nodes will deliberately choose to limit their peers and their dedicated bandwidth, functionally moving themselves toward the edge of the network.

"The powers of financial capitalism had another far-reaching aim, nothing less than to create a world system of financial control in private hands able to dominate the political system of each country and the economy of the world as a whole. This system was to be controlled in a feudalist fashion by the central banks of the world acting in concert, by secret agreements arrived at in frequent meetings and conferences. The apex of the systems was to be the Bank for International Settlements in Basel, Switzerland, a private bank owned and controlled by the world's central banks which were themselves private corporations. Each central bank...sought to dominate its government by its ability to control Treasury loans, to manipulate foreign exchanges, to influence the level of economic activity in the country, and to influence cooperative politicians by subsequent economic rewards in the business world."

- Carroll Quigley, CFR member, mentor to Bill Clinton, from 'Tragedy And Hope'
Serith
Sr. Member
****
Offline Offline

Activity: 269
Merit: 250


View Profile
February 24, 2013, 01:46:41 AM
 #432

If block verification were a distributable process, you'd be correct, but it isn't. […]

Let's say you have 1025 full nodes on the network. To keep the example simple, we'll have those nodes connected in the form of a binary tree, where a block starts propagating from its root. Let's say it takes 1 minute for a node to verify a block and relay it to its children; then it will take 10 minutes for the 1024 nodes (I exclude the root from verification time) to verify the block. In other words, it takes 10 minutes for a block to get propagated and verified by the network.

Now let's have a bigger block that takes twice as long to verify; then it will take 20 minutes for the 1024 nodes to verify the block. In other words, it will take twice as many minutes for the block to get propagated and verified by the network. Therefore the relationship between the verification time of a block and the propagation delay it causes is linear: if it takes three times as long to verify a block then, in this simplified example, it will take three times as long to propagate it through the network.
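The same arithmetic in Python, using Serith's own numbers (the perfect binary tree and the fixed per-node verify time are his simplifying assumptions):

```python
import math

# Nodes at the same tree depth verify in parallel, so total propagation
# time is (tree depth below the root) x (per-node verify-and-relay time).

def propagation_minutes(nodes_below_root, verify_minutes):
    depth = math.ceil(math.log2(nodes_below_root))  # 1024 nodes -> 10 levels
    return depth * verify_minutes

print(propagation_minutes(1024, 1))   # 10 minutes
print(propagation_minutes(1024, 2))   # 20 minutes: linear in verify time
```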
MoonShadow
Legendary
*
Offline Offline

Activity: 1708
Merit: 1007



View Profile
February 24, 2013, 05:31:31 AM
 #433

The verification delays grow exponentially with the number of transactions simply because each node must perform it's own series of checks before forwarding said block.

Wrong, because the number of nodes getting involved in verification of specific block grows exponentially as well, so the relationship between number of transactions and propagation time is linear.

As an analogy think about how a bacteria multiplies, at every step a size of the colony increased by factor of 2 until it's reached lets say 1024, if suddenly it requires twice more time to multiple then it takes only twice longer to reach the size of 1024 because the number of multiplication steps is still the same.

If block verification were a distributable process, you'd be correct, but it isn't.  At least it's not now, and I don't know how it could be done.  One thing that could be altered to speed up propagation is for some nodes to have trusted peers, wherein if they receive a block from that peer, they re-broadcast that block first then do their checks.  But if this was the default behavior, the network could be DDOS'ed with false blocks.

Lets say you have 1025 full nodes on the network, to keep the example simple we will have those nodes connected in a form of binary tree where a block starts propagating from it's root. Lets say it takes 1 minutes for a node to verify a block and relay it to its children, then it will take 10 minutes for 1024 nodes ( I exclude the root from verification time) to verify the block, in another words it takes 10 minutes for a block to get propagated and verified by the network.

Now let's have a bigger block that takes twice as long to verify; then it will take 20 minutes for 1024 nodes to verify the block. In other words, it takes twice as many minutes for the block to be propagated and verified by the network. Therefore the relationship between a block's verification time and the propagation delay it causes is linear, e.g. if it takes three times as long to verify a block then, in the simplified example, it will take three times as long to propagate it through the network.
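The simplified binary-tree example above can be sketched in a few lines. This is a hypothetical Python illustration of the toy model only, not actual network behaviour; `propagation_time` is my own name:

```python
import math

def propagation_time(n_nodes: int, verify_minutes: float) -> float:
    """Minutes for a block to reach every node of a complete binary tree,
    where each node verifies for verify_minutes before relaying to its
    children.  The root is excluded from verification time, so the total
    is just the tree depth times the per-node verification time."""
    depth = math.floor(math.log2(n_nodes))  # levels below the root
    return depth * verify_minutes

print(propagation_time(1025, 1.0))  # 10.0 -> the 10-minute case above
print(propagation_time(1025, 2.0))  # 20.0 -> doubling verification doubles delay
print(propagation_time(1025, 3.0))  # 30.0 -> tripling triples it: linear
```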

Your example does not relate to the network.  It's not a binary tree, and doubling the size of the block does not simply double the verification times.  While it might actually be close enough to ignore in practice, the increase in the actual number of transactions in the Merkle tree makes the tree itself more complex, with more binary branches and thus more layers.  This implies a greater complexity in the verification process on its own.  For example, a simple block with only four transactions will have the TxID hashes for those four transactions, plus two hashes for their paired Merkle tree branches, and a final hash that pairs those two together; that final hash is the Merkle root, which is then included in the block header.  Moving beyond four transactions, however, creates another layer in the binary tree; then another after 8 transactions, and another after 16 transactions.  Once you're into a block with several thousand transactions, your Merkle tree is going to have several layers, and only the bottom of the tree holds actual transaction IDs; all the rest are artifacts of the Merkle tree, which every verification of the block must replicate.  The binary Merkle tree within a block is a very efficient way to verifiably store the data so that it can be trimmed later, but it's most certainly not a linear progression.  More complex transaction types, such as contracts or send-to-many, have a similar effect, as the process for verifying transactions is not as simple as a single hash.  Portions of the entire transaction must be excluded from the signing hash that each input address must add to the end of each transaction, and then be re-included as the verification process marches down the list of inputs.  And that is just an example of what I, personally, know about the process.  
Logically, the increase in clock cycles for larger and more complex blocks and transactions must increase the propagation times at least linearly, and it's very likely to be greater than linear, i.e. superlinear.  It may or may not matter in practice, as that superlinear growth may be so small as to not really affect the outcomes, but it is very likely present; and the larger and more complex that blocks are permitted to grow, the more likely said growth will metastasize to a noticeable level.
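To make the layer counts in the four-transaction example concrete, here is a small sketch (hypothetical Python; `merkle_stats` is my own name) that counts the layers and internal hashes of a Merkle tree, pairing TxIDs the way Bitcoin does, with an odd leftover duplicated:

```python
def merkle_stats(n_tx: int) -> tuple[int, int]:
    """Return (layers, internal_hashes) for a Merkle tree built over
    n_tx transaction IDs.  An odd element at any level is paired with
    a copy of itself, as in Bitcoin's block Merkle tree."""
    layers = 0
    internal = 0
    width = n_tx
    while width > 1:
        width = (width + 1) // 2  # pair up, duplicating an odd leftover
        internal += width         # one hash per resulting pair
        layers += 1
    return layers, internal

print(merkle_stats(4))     # (2, 3): two pair-hashes plus the root
print(merkle_stats(2400))  # layers and internal hashes at ~1MB saturation
```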

"The powers of financial capitalism had another far-reaching aim, nothing less than to create a world system of financial control in private hands able to dominate the political system of each country and the economy of the world as a whole. This system was to be controlled in a feudalist fashion by the central banks of the world acting in concert, by secret agreements arrived at in frequent meetings and conferences. The apex of the systems was to be the Bank for International Settlements in Basel, Switzerland, a private bank owned and controlled by the world's central banks which were themselves private corporations. Each central bank...sought to dominate its government by its ability to control Treasury loans, to manipulate foreign exchanges, to influence the level of economic activity in the country, and to influence cooperative politicians by subsequent economic rewards in the business world."

- Carroll Quigley, CFR member, mentor to Bill Clinton, from 'Tragedy And Hope'
solex
Legendary
*
Offline Offline

Activity: 1078
Merit: 1002


100 satoshis -> ISO code


View Profile
February 24, 2013, 06:40:51 AM
 #434

I'm pretty sure that the 250KB limit has never been broken to date.

block 187901 499,245 bytes
block 191652 499,254 bytes
block 182862 499,261 bytes
block 192961 499,262 bytes
block 194270 499,273 bytes

These are the biggest 5 blocks up to the checkpoint at block 216,116.  Interesting that they're all 499,2xx bytes long, as if whoever is mining them doesn't want to get too close to 500,000 bytes long.

I understand that at least one miner has their own soft-limit, probably Eligius and probably at 500kb.

I take that back because these blocks are version 1 and Eligius is supposedly producing all version 2 (http://blockorigin.pfoe.be/top.php)

However, I have extracted the transaction counts and they average 1190 each. Looking at a bunch of blocks maxing out at 250KB, they are in the region of 600 transactions each, which is to be expected. Obviously, there is block header overhead in all of them. But this does mean that 1MB blocks will be saturated when they carry about 2400 transactions. This ignores the fact that some blocks are virtually empty, as a few miners seem not to care about including many transactions.

So 2400 transactions per block * 144 blocks per day = 345,600 transactions per day or Bitcoin's maximum sustained throughput is just 4 transactions per second.
This is even more anemic than the oft-quoted 7 tps!
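The arithmetic behind that figure, as a quick sketch using the numbers above:

```python
# Back-of-envelope throughput from the observed figures above.
tx_per_block = 2400        # observed saturation point of a 1MB block
blocks_per_day = 144       # one block roughly every 10 minutes
tx_per_day = tx_per_block * blocks_per_day
tps = tx_per_day / 86_400  # seconds in a day

print(tx_per_day, tps)  # 345600 4.0
```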


hazek
Legendary
*
Offline Offline

Activity: 1078
Merit: 1002


View Profile
February 24, 2013, 09:26:29 AM
 #435

I'm pretty sure that the 250KB limit has never been broken to date.

block 187901 499,245 bytes
block 191652 499,254 bytes
block 182862 499,261 bytes
block 192961 499,262 bytes
block 194270 499,273 bytes

These are the biggest 5 blocks up to the checkpoint at block 216,116.  Interesting that they're all 499,2xx bytes long, as if whoever is mining them doesn't want to get too close to 500,000 bytes long.

I understand that at least one miner has their own soft-limit, probably Eligius and probably at 500kb.

I take that back because these blocks are version 1 and Eligius is supposedly producing all version 2 (http://blockorigin.pfoe.be/top.php)

However, I have extracted the transaction counts and they average 1190 each. Looking at a bunch of blocks maxing out at 250KB, they are in the region of 600 transactions each, which is to be expected. Obviously, there is block header overhead in all of them. But this does mean that 1MB blocks will be saturated when they carry about 2400 transactions. This ignores the fact that some blocks are virtually empty, as a few miners seem not to care about including many transactions.

So 2400 transactions per block * 144 blocks per day = 345,600 transactions per day or Bitcoin's maximum sustained throughput is just 4 transactions per second.
This is even more anemic than the oft-quoted 7 tps!

Excellent. Someone finally quoted some real numbers instead of theoretical maximums, the picture is a bit clearer now, thanks!

My personality type: INTJ - please forgive my weaknesses (Not naturally in tune with others feelings; may be insensitive at times, tend to respond to conflict with logic and reason, tend to believe I'm always right)

If however you enjoyed my post: 15j781DjuJeVsZgYbDVt2NZsGrWKRWFHpp
MoonShadow
Legendary
*
Offline Offline

Activity: 1708
Merit: 1007



View Profile
February 24, 2013, 12:37:11 PM
 #436

I'm pretty sure that the 250KB limit has never been broken to date.

block 187901 499,245 bytes
block 191652 499,254 bytes
block 182862 499,261 bytes
block 192961 499,262 bytes
block 194270 499,273 bytes

These are the biggest 5 blocks up to the checkpoint at block 216,116.  Interesting that they're all 499,2xx bytes long, as if whoever is mining them doesn't want to get too close to 500,000 bytes long.

I understand that at least one miner has their own soft-limit, probably Eligius and probably at 500kb.

I take that back because these blocks are version 1 and Eligius is supposedly producing all version 2 (http://blockorigin.pfoe.be/top.php)

However, I have extracted the transaction counts and they average 1190 each. Looking at a bunch of blocks maxing out at 250KB, they are in the region of 600 transactions each, which is to be expected. Obviously, there is block header overhead in all of them. But this does mean that 1MB blocks will be saturated when they carry about 2400 transactions. This ignores the fact that some blocks are virtually empty, as a few miners seem not to care about including many transactions.

So 2400 transactions per block * 144 blocks per day = 345,600 transactions per day or Bitcoin's maximum sustained throughput is just 4 transactions per second.
This is even more anemic than the oft-quoted 7 tps!



That is truly awful news.

markm
Legendary
*
Offline Offline

Activity: 2940
Merit: 1090



View Profile WWW
February 24, 2013, 12:40:08 PM
 #437

That is truly awful news.

Why? What is the proportionality of exchange rates to transaction rates?

The exchange rate is approximately $30 per coin at the current transaction rate. What exchange rate would it take to drive the transaction rate up to 4 per second?

-MarkM-

Browser-launched Crossfire client now online (select CrossCiv server for Galactic  Milieu)
Free website hosting with PHP, MySQL etc: http://hosting.knotwork.com/
MoonShadow
Legendary
*
Offline Offline

Activity: 1708
Merit: 1007



View Profile
February 24, 2013, 12:43:16 PM
 #438

That is truly awful news.

Why? What is the proportionality of exchange rates to transaction rates?

The exchange rate is approximately $30 per coin at the current transaction rate. What exchange rate would it take to drive the transaction rate up to 4 per second?

-MarkM-


The BTC-to-fiat exchange rates have no direct influence on the transaction rate.  Only the size of the economy has a real influence on the transaction rate, though severe limitations on the transaction rate might limit the size of the economy.

markm
Legendary
*
Offline Offline

Activity: 2940
Merit: 1090



View Profile WWW
February 24, 2013, 12:47:04 PM
 #439

And a severe focus on trivial transactions, worth less than it costs to make them possible in the first place, might also limit the economy.

If it comes down to transacting houses and cars or lollipops and chewing gum, let's throw some lollipops and chewing gum in the glove compartments and pantries and stick to transacting cars and houses.

There IS some direct relation anyway: not enough total value across all 21 million coins means not enough miner savings to buy better infrastructure. So I challenge your assertion of "no direct" influence, or at least want to see the formula behind it.

Halve the exchange rate and you halve, directly, miners' capital savings.

Divide exchange rate by ten and you decimate the liquid production-capital of the economy.

Multiply it by ten ... would that multiply the liquid production-capital of the economy? By how much?
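The direct proportionality described here holds at least for the block subsidy component of miner income. A hypothetical sketch: `daily_miner_revenue_usd` is my own name, fees are ignored, and 25 BTC is assumed as the subsidy of the period:

```python
def daily_miner_revenue_usd(price_usd: float,
                            block_subsidy_btc: float = 25.0,
                            blocks_per_day: int = 144) -> float:
    """Fiat value of newly minted coins per day (fees ignored);
    linear in the exchange rate."""
    return price_usd * block_subsidy_btc * blocks_per_day

print(daily_miner_revenue_usd(30.0))  # 108000.0 at ~$30/coin
print(daily_miner_revenue_usd(15.0))  # 54000.0 -- halving the price halves it
```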

-MarkM-

TierNolan
Legendary
*
Offline Offline

Activity: 1232
Merit: 1084


View Profile
February 24, 2013, 02:27:31 PM
 #440

Have you encountered the term "merged mining" yet?

Yes, and the lower chains would almost certainly be merge-mined.  They would have their own proof-of-work totals.  It would be worth making this easier.

The objective of the proposal was to keep things tracked by a single main/root parent chain.

The proof of work is separate; if you trust the root chain, you can trust the lower chains (but less so as you move downwards).

Having alt chains means exchange rate risk.  Of course, my suggestion also carries some exchange rate risk, if one of the chains is badly verified but somehow survives for 120 confirmations.

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF