Bitcoin Forum
Author Topic: How a floating blocksize limit inevitably leads towards centralization  (Read 71576 times)
solex
Legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
February 28, 2013, 04:56:36 AM  #501


Previously I supported changing the protocol in a carefully planned way to improve the end-user experience, but recently I discovered that you can double-spend on both the original chain and the new chain after a hard fork. That would break the promises of double-spend prevention and limited supply, which is much more severe than I thought.


That simply means that, after a very short period of chaos post-fork, simple economic incentives would VERY quickly force a consensus on one of the chains.  The chaos would not be permitted to continue, by anyone, whichever side they personally want to "win", as it would cost them too much.

Or there could be two chains, each with its own pros and cons.  While all of us early investors would be able to spend on each chain, it should function like a stock split: though we would have twice as many 'shares', each one would be worth only 50% of the original value.  It could be 90/10 or 80/20, though, or any two percentages summing to 100%.  If you wanted to favor one chain over the other, you could sell your coins on one and buy coins on your preferred chain.

No. It wouldn't happen. As soon as a fork occurred, the fx rate of new coins mined on the "weaker" chain would collapse to a few cents, as all the major businesses, websites, miners, and exchanges would automatically side with the "stronger" chain. If anyone thinks they can still double-spend bitcoins on different websites a few hours in (some accepting one fork, some the other), they are living in dreamland.

Cubic Earth
Legendary
Activity: 1176
Merit: 1020
February 28, 2013, 05:55:54 AM  #502

No. It wouldn't happen. As soon as a fork occurred, the fx rate of new coins mined on the "weaker" chain would collapse to a few cents, as all the major businesses, websites, miners, and exchanges would automatically side with the "stronger" chain. If anyone thinks they can still double-spend bitcoins on different websites a few hours in (some accepting one fork, some the other), they are living in dreamland.

So you think there is room for one and only one crypto-currency in the world?  I disagree.
zebedee
Donator
Hero Member
Activity: 668
Merit: 500
February 28, 2013, 06:51:36 AM  #503

So we need a maximum block size that is high enough that the vast majority of nodes are comfortable with it, yet not so big that it can be used to manipulate the difficulty by artificially slowing propagation across the network with massive blocks. By maintaining the propagation window through its relation to difficulty, we may be able to determine whether block propagation is slowing and whether the max_blocksize should be adjusted down to keep the propagation window stable.

A measure of how fast blocks are propagating is the number of orphans.  If it takes 1 minute for all miners to be notified of a new block, then on average the orphan rate would be about 10%.

However, a core of miners on high-speed connections could keep that down, and orphans are by definition not part of the block chain.
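
A rough sanity check of that 10% figure, assuming blocks arrive as a Poisson process with a 600-second mean interval (my modeling assumption, not something stated here):

Code:
import math

delay = 60.0      # seconds for a block to reach all miners (the 1 minute above)
interval = 600.0  # target block interval

# Probability a rival block is found during the propagation delay,
# i.e. the expected orphan rate under Poisson block arrivals.
orphan_rate = 1 - math.exp(-delay / interval)
print(f"{orphan_rate:.1%}")  # ~9.5%, close to the 10% quoted above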

Maybe add an orphan link as part of the header field.  If included, the block links back to 2 previous blocks: the "real" parent and the orphan (the extra link has no effect other than proving the orphan exists).  This would allow counting of orphans.  Only orphans one block off the main chain would be acceptable.  Also, the header of the orphan block is sufficient; the block itself can be discarded.

Only allowing max_block_size upward modification if the difficulty increases seems like a good idea too.

A 5% orphan rate probably wouldn't knock small miners out of things.  Economies of scale are likely to be more than that anyway.

Capping the change at 10% per two-week interval gives a potential growth of about 10X per year (the 8th root of 2 per retarget compounds to 2^(26/8) ≈ 9.5X over 26 retargets), which is likely to be at least as fast as the network can scale.

So, something like

Code:
// r = pow(2, 1.0/8) ~= 1.09, applied once per 2016-block retarget
if ((median_size_of_last_2016_blocks < max_block_size / 3 && difficulty_decreased)
        || orphan_rate > 0.05)
    max_block_size /= r;
else if (median_size_of_last_2016_blocks > 2 * max_block_size / 3 && difficulty_increased)
    max_block_size *= r;
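For concreteness, here is a runnable sketch of the rule above (the thresholds and the 8th-root-of-2 step come from the pseudocode; the function name and argument names are mine):

Code:
R = 2 ** (1 / 8)  # the "8th root of 2" step, ~1.0905

def adjust(max_size, median_size, diff_decreased, diff_increased, orphan_rate):
    """One max_block_size update per 2016-block retarget period."""
    if (median_size < max_size / 3 and diff_decreased) or orphan_rate > 0.05:
        return max_size / R
    if median_size > 2 * max_size / 3 and diff_increased:
        return max_size * R
    return max_size

# Full-ish blocks and rising difficulty: the cap grows ~9% per retarget.
print(adjust(1_000_000, 800_000, False, True, 0.01))   # ~1090508
# Mostly-empty blocks and falling difficulty: the cap shrinks.
print(adjust(1_000_000, 200_000, True, False, 0.01))   # ~917004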
The issue is that if you knock out small miners, a cartel could keep the orphan rate low, and thus prevent the size from being reduced.
You can't have anything such as "orphan rate" in a rule, as there is no consensus on it.  Nothing makes my idea of orphans, and hence orphan rate, match yours.

Just like there is no consensus on the set of unconfirmed transactions on the network.
TierNolan
Legendary
Activity: 1232
Merit: 1104
February 28, 2013, 10:13:52 AM  #504
Last edit: February 28, 2013, 11:23:53 AM by TierNolan

You can't have anything such as "orphan rate" in a rule, as there is no consensus on it.  Nothing makes my idea of orphans, and hence orphan rate, match yours.

I made a suggestion earlier in the thread about how to do it.

You could add an additional field to the block header.  You would have 2 links: the previous block and, optionally, an orphan block.  The orphan block's previous-block link has to point to a block in the current difficulty period.  Also, the check is a header-only check; there is no need to store the orphan block's details.  An orphan with invalid txs would still be a valid orphan.

Since a change to MAX_BLOCK_SIZE is already a fork, you could add this to the header at that time.

The orphan rate is equal to the fraction of blocks whose headers link to a unique orphan block, i.e. if 2 blocks link to the same orphan, that counts as only 1 orphan.
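
A sketch of how that counting could work (the Header type and field names here are invented for illustration, not taken from any real client):

Code:
from dataclasses import dataclass
from typing import Optional

@dataclass
class Header:
    prev_hash: str
    orphan_hash: Optional[str] = None  # optional link to an orphan's header

def orphan_rate(window):
    """Unique orphans linked from headers in the window, per block."""
    orphans = {h.orphan_hash for h in window if h.orphan_hash is not None}
    return len(orphans) / len(window)

# Two blocks linking the same orphan count as a single orphan:
chain = [Header("a"), Header("b", "x"), Header("c", "x"), Header("d")]
print(orphan_rate(chain))  # 0.25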

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
TierNolan
Legendary
Activity: 1232
Merit: 1104
February 28, 2013, 11:41:46 AM  #505

Another method for allowing an increase to the max block size would be to have clients reluctantly accept larger blocks.

For example, when comparing 2 chains, you would accept a longer chain even if it contains an over-size block, as long as the offending block is buried deep enough.

When working out the size of a block for MAX_BLOCK_SIZE purposes, you use block_size / pow(1.1, depth).  So a 10MB block that is 100 blocks from the end of the chain has an effective size of about 760 bytes.  Depth in this context should probably be proof-of-work based rather than number of blocks, though it probably doesn't matter much: (proof of work in the chain since the block) / (proof of work per block at current difficulty).
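
A quick check of that arithmetic (treating 10MB as 10 MiB, which is my assumption):

Code:
def effective_size(block_size, depth, discount=1.1):
    """block_size / pow(1.1, depth), as described above."""
    return block_size / discount ** depth

# A 10 MiB block buried 100 blocks deep:
print(effective_size(10 * 1024 * 1024, 100))  # ~761 bytes, matching the ~760 above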

This could be combined with users being able to set their max_block_size target in a config file.

Miners who mine very large blocks would find that users won't accept them for a while, so they would end up with a higher orphan rate.  However, this prevents a permanent hard fork: if the majority of the hashing power accepts the higher block size, other clients will eventually accept the new chain.

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
twolifeinexile
Full Member
Activity: 154
Merit: 100
February 28, 2013, 01:53:14 PM  #506

Another method for allowing an increase to the max block size would be to have clients reluctantly accept larger blocks.
For example, when comparing 2 chains, you would accept a longer chain even if it contains an over-size block, as long as the offending block is buried deep enough.
When working out the size of a block for MAX_BLOCK_SIZE purposes, you use block_size / pow(1.1, depth).  So a 10MB block that is 100 blocks from the end of the chain has an effective size of about 760 bytes.  Depth in this context should probably be proof-of-work based rather than number of blocks, though it probably doesn't matter much: (proof of work in the chain since the block) / (proof of work per block at current difficulty).
This could be combined with users being able to set their max_block_size target in a config file.
Miners who mine very large blocks would find that users won't accept them for a while, so they would end up with a higher orphan rate.  However, this prevents a permanent hard fork: if the majority of the hashing power accepts the higher block size, other clients will eventually accept the new chain.
The point is always that if one chain's growth (in terms of difficulty) is larger than the other's, say because of a 10MB block you do not like, you cannot just ignore it; you have to estimate whether you lose more by ignoring it or by accepting it. It is an economic decision.

This may work, but large-bandwidth, well-connected mining hubs would still have a large impact (maybe legitimately so). Besides, we are just in the early stage; once bitcoin gains real momentum, there will be all kinds of gaming and tweaking of client behavior to gain an edge.
zebedee
Donator
Hero Member
Activity: 668
Merit: 500
March 01, 2013, 12:59:05 AM  #507

Things might be getting interesting already.  The vast majority of blocks for the last 3 hours have been around 240k, the soft limit...  one payment to me took 1.5 hrs to confirm, which I've never had happen before.  And this is while the hash rate is way above difficulty...
MoonShadow
Legendary
Activity: 1708
Merit: 1010
March 01, 2013, 01:56:31 AM  #508

Things might be getting interesting already.  The vast majority of blocks for the last 3 hours have been around 240k, the soft limit...  one payment to me took 1.5 hrs to confirm, which I've never had happen before.  And this is while the hash rate is way above difficulty...

Well, transaction volume is also way above the norm.  It could just be sellers trying to get bitcoins into their MtGox accounts and buyers trying to get them out.

"The powers of financial capitalism had another far-reaching aim, nothing less than to create a world system of financial control in private hands able to dominate the political system of each country and the economy of the world as a whole. This system was to be controlled in a feudalist fashion by the central banks of the world acting in concert, by secret agreements arrived at in frequent meetings and conferences. The apex of the systems was to be the Bank for International Settlements in Basel, Switzerland, a private bank owned and controlled by the world's central banks which were themselves private corporations. Each central bank...sought to dominate its government by its ability to control Treasury loans, to manipulate foreign exchanges, to influence the level of economic activity in the country, and to influence cooperative politicians by subsequent economic rewards in the business world."

- Carroll Quigley, CFR member, mentor to Bill Clinton, from 'Tragedy And Hope'
markm
Legendary
Activity: 2996
Merit: 1121
March 01, 2013, 12:11:20 PM  #509

It is easy to throw around many more transactions than usual, though, and maybe some people figure that the more they put out there, the bigger the block-size increase they will end up causing. If everyone who wants larger blocks starts moving coins between wallets all day, they can get a bunch of mixing done, plus maybe help their lobbying efforts.

Any manipulable decision-making system is liable to being manipulated.

-MarkM-

Browser-launched Crossfire client now online (select CrossCiv server for Galactic  Milieu)
Free website hosting with PHP, MySQL etc: http://hosting.knotwork.com/
hello_good_sir
Hero Member
Activity: 1008
Merit: 531
March 03, 2013, 07:07:38 AM  #510

Bitcoin's niche is that you have control of your money.  It is a gold replacement/complement, not a PayPal replacement.  Anyone who does not understand this is simply not going to be able to produce a solution that works.

In order for bitcoin to survive, the blockchain needs to fit comfortably on the hard drive of a 5-year-old desktop.  So in 2030 the blockchain needs to fit on a computer built in 2025.  It doesn't need to fit onto phones, Xboxes, microwaves, etc., but the requirement that it fit onto a mediocre desktop is non-negotiable.  Failure to meet this requirement means that the chain can no longer be audited.  This is the #1 requirement; nothing else matters if it does not hold.  All confidence would be lost, and bitcoin would have no competitive advantage over its rivals.  Any competitive advantage due to ease of use will either be destroyed by regulation or adopted by rivals.
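
Some rough arithmetic behind that requirement (assuming 144 blocks per day and every block at the full cap, both my assumptions):

Code:
BLOCKS_PER_YEAR = 144 * 365  # ~52,560 blocks at one block per 10 minutes

# Worst-case chain growth per year at various block size caps.
for cap_mb in (1, 10, 100):
    growth_gb = BLOCKS_PER_YEAR * cap_mb / 1000
    print(f"{cap_mb:>3} MB cap -> ~{growth_gb:,.0f} GB/year")
# 1 MB -> ~53 GB/year; 10 MB -> ~526 GB/year; 100 MB -> ~5,256 GB/year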

So basically... miner-manipulable limits are out of the question.

Stampbit
Full Member
Activity: 182
Merit: 100
April 13, 2013, 09:34:55 PM  #511

If we want to cap the download overhead for the latest block at, say, 1% of the block interval, we need to be able to download a MAX_BLOCKSIZE block within 6 seconds on average, so that we can spend 99% of the time hashing.

At 1MB, you would need a ~1.7Mbps  connection to keep downloading time to 6s.
At 10MB, 17Mbps
At 100MB, 170Mbps

and you start to see why even a 100MB block size would render 90% of the world's population unable to participate in mining.
Even at 10MB, it requires investing in a relatively high-speed connection.

Thank you. This is the clearest explanation yet of how an increase in the maximum block size raises the minimum bandwidth requirements for mining nodes.
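
For what it's worth, the raw line-rate arithmetic (the ~1.7Mbps per MB quoted above presumably budgets for protocol overhead, which this sketch ignores):

Code:
DOWNLOAD_BUDGET_S = 6  # 1% of the 600 s block interval, per the quote above

# Minimum line rate needed to fetch a full block within the budget.
for size_mb in (1, 10, 100):
    mbps = size_mb * 8 / DOWNLOAD_BUDGET_S
    print(f"{size_mb:>3} MB block -> {mbps:>6.1f} Mbps minimum")
# 1 MB -> 1.3 Mbps; 10 MB -> 13.3 Mbps; 100 MB -> 133.3 Mbps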
Hmm.  The header can be downloaded in parallel with / separately from the block body, and hashing can start after receiving just the header; that's milliseconds of time.  Perhaps a "quick" list of outputs spent by the block would be useful for building non-trivial blocks that don't include double-spends, but that would be ~5% of the block size?  Plenty of room for "optimization" here were it ever an issue.

Fake headers / tx lists that don't match the actual body?  That's a black mark against the peer who gave them to you as untrustworthy.  Too many black marks and you ignore future "headers" from him as a proven time-waster.

Build up trust with your peers, just like real life.

Maybe I'm missing something here: why aren't blocks downloaded in the background while current blocks are being worked on? Why is this bandwidth issue even an issue?
TierNolan
Legendary
Activity: 1232
Merit: 1104
April 13, 2013, 09:39:23 PM  #512

Maybe I'm missing something here: why aren't blocks downloaded in the background while current blocks are being worked on? Why is this bandwidth issue even an issue?

You can't build a block without knowing the previous block, and you don't know that until the new block is finished.

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
Stampbit
Full Member
Activity: 182
Merit: 100
April 13, 2013, 11:36:47 PM  #513

Maybe I'm missing something here: why aren't blocks downloaded in the background while current blocks are being worked on? Why is this bandwidth issue even an issue?

You can't build a block without knowing the previous block, and you don't know that until the new block is finished.

Oh I see, that's why everyone keeps suggesting parallelizing the blockchain, and of course that would end up creating two separate networks where one would eventually win out anyway. Turing wins again.