Topic: How a floating blocksize limit inevitably leads towards centralization
caveden
February 18, 2013, 08:48:37 PM
#21

Great posts from Mike and Gavin in this thread. There's indeed no reason to panic over "too much centralization". Actually, setting an arbitrary limit (or an arbitrary formula to set the limit) is the very definition of "central planning", while letting it be set spontaneously is the very definition of "decentralized order".

Also, having fewer participants in a market because those participants are good enough to keep aspiring competitors at bay is not a bad thing. The problem arises when barriers to entry are artificial (legal, bureaucratic, etc.), not when they're part of the business itself. Barriers to entry that are part of the business mean the current market participants are so advanced that anyone else wanting to enter has to get at least as good as them just to start.

Removing the block cap means a hard fork, and once we decided to do that we may as well throw in some "no brainer" upgrades as well, like supporting ed25519 which is orders of magnitude faster than ECDSA+secp256k1. Then a single strong machine can go up to hundreds of thousands of transactions per second.

That's cool. Please, core devs, consider studying what other hard fork changes would be interesting to put in, because we risk hitting the 1 MB limit quite soon.
Zeilap
February 18, 2013, 09:06:13 PM
#22

That's cool. Please, core devs, consider studying what other hard fork changes would be interesting to put in, because we risk hitting the 1 MB limit quite soon.
Seems they've read your mind: https://en.bitcoin.it/wiki/Hardfork_Wishlist ;)
OhShei8e
February 18, 2013, 09:08:01 PM
#23

I think we should put users first. What do users want? They want low transaction fees and fast confirmations.

This comes down to Bitcoin as a payment network versus Bitcoin as a store of value. I thought it was already determined that there will always be better payment networks that function as alternatives to Bitcoin. A user who cares about the store-of-value use case is going to want the network hash rate to be as high as possible. This is at odds with low transaction fees and fast confirmations.

People invest because others do the same. Money follows money. The larger the user base, the higher the value of the bitcoin. This has nothing to do with the hash rate. Nevertheless, the hash rate will be gigantic.
Mike Hearn
February 18, 2013, 09:12:34 PM
#24

Network assurance contracts are far from a sure thing. It's basically an attempt to solve the tragedy of the commons, and the success rate society has had there is pitiful, even with strong central authorities. Assuming they will work is a big risk.

Sure, there is some risk, but Kickstarter is showing that the general concept can indeed fund public goods.

I don't see any reason to think CPU power will be the issue. It's network capacity and disk space that is the problem.

1.2 megabytes a second is only ~10 megabits per second - pretty sure my parents' house has more bandwidth than that. Google is wiring Kansas City with gigabit fibre right now, and we're not running it as a charity. So network capacity doesn't worry me a whole lot. There's plenty of places in the world that can keep up with a poxy 10 megabits.

3 TB per month of transfer is, again, not a big deal. For a whopping $75 per month bitvps.com will rent you a machine that has 5 TB of bandwidth quota per month and 100 Mbit connectivity.

Lots of people can afford this. But by the time Bitcoin gets to that level of traffic, if it ever does, it might cost more like $75 a year.
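
As a quick sanity check on those two figures (taking the 1.2 MB/s average rate quoted above; the exact monthly total depends on month length), the unit conversion works out roughly like this:

Code:
# Rough unit-conversion check, assuming a sustained 1.2 MB/s of transaction data.
bytes_per_second = 1.2e6
megabits_per_second = bytes_per_second * 8 / 1e6             # ~9.6 Mbit/s
terabytes_per_month = bytes_per_second * 86400 * 30 / 1e12   # ~3.1 TB/month
print(f"{megabits_per_second:.1f} Mbit/s, {terabytes_per_month:.1f} TB/month")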

You also have to ask the question, what % of that 3TiB/month results in unspent txouts? Ultimately it's the UTXO set that is the hard limit on the storage requirements for full validating nodes. Even at just 1% volume growth, you're looking at 3GiB/month growth in your requirement for fast-random-access memory.

How did you arrive at 3GB/month? The entire UTXO set currently fits in a few hundred megs of RAM.

All the time you're spending waiting for transactions to be retrieved from memory is time you aren't hashing.

Why? Hashing happens in parallel to checking transactions and recalculating the merkle root.

Your example has nothing to do with Bitcoin. Even in the early days it would be obvious to anyone who understood comp-sci that static websites are O(1) scaling per client, so there isn't any reason to think you couldn't create websites for as much load as you wanted.

Nobody in 1993 could build a website that the entire world used all the time (like Google or Wikipedia). The technology did not exist.

Following your line of thinking, there should have been some way to ensure only the elite got to use the web. Otherwise how would it work? As it got too popular all the best websites would get overloaded and fall over. Disaster.

Or what about the global routing table? Every backbone router needs a complete copy of the routing table. BGP is a broadcast network. How can the internet backbone scale? Perhaps we should only allow people to access the internet at universities to avoid uncontrollable growth of the routing table.

I just don't see scalability as ever being a problem, assuming effort is put into better software. Satoshi didn't think this would be a problem either; it was one of the first conversations we ever had. These conversations have been going around and around for years. I am unconvinced we're developing any better insight into it anymore. Satoshi's vision was for the block limit to be removed. So let's do it.
Zeilap
February 18, 2013, 09:42:38 PM
#25

It seems like the requirements for full verification and those for mining are being conflated. Either way, I see the following solutions.

If you need to run a full verification node, then, as Mike points out, you can rent the hardware to do it along with enough bandwidth. If full verification becomes too much work for a general purpose computer, then we'll begin to see 'node-in-a-box' set-ups where the network stuff is managed by an embedded processor and the computation farmed out to an FPGA/ASIC. Alternatively, we'll see distributed nodes among groups of friends, where each verifies some predetermined subset. This way, you don't have to worry about random sampling. You can then even tell the upstream nodes from you to only send the transactions which are within your subset, so your individual bandwidth is reduced to 1/#friends.
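
A minimal sketch of how such a predetermined subset could be assigned, say by hashing the transaction id modulo the number of friends; the partitioning rule and function below are illustrative assumptions, not a worked-out protocol:

Code:
import hashlib

def assigned_friend(txid: str, num_friends: int) -> int:
    # Deterministic partition: the group agrees up front which friend verifies
    # which txids, so each member sees only ~1/num_friends of the traffic.
    digest = hashlib.sha256(bytes.fromhex(txid)).digest()
    return int.from_bytes(digest[:8], "big") % num_friends

# Example: in a group of 5 friends, only the friend whose index is returned
# needs to fetch and verify this transaction.
txid = "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b"
print(assigned_friend(txid, 5))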

If you want to be a miner, you can run a modified node on rented hardware as above, which simply gives you a small number of transactions to mine on, rather than having to sort them out yourself. This way, you can reduce your bandwidth to practically nothing - you'd get a new list each time a block is mined.
Peter Todd (OP)
February 18, 2013, 09:46:16 PM
#26

Network assurance contracts are far from a sure thing. It's basically an attempt to solve the tragedy of the commons, and the success rate society has had there is pitiful, even with strong central authorities. Assuming they will work is a big risk.

Sure, there is some risk, but Kickstarter is showing that the general concept can indeed fund public goods.

For consumer products where you get a tangible object in return. Security through hashing power is nothing like Kickstarter.

1.2 megabytes a second is only ~10 megabits per second - pretty sure my parents' house has more bandwidth than that. Google is wiring Kansas City with gigabit fibre right now, and we're not running it as a charity. So network capacity doesn't worry me a whole lot. There's plenty of places in the world that can keep up with a poxy 10 megabits.

No, that's 1.2 MiB/s on average; you need well above that to keep your orphan rate down.

Again, you're making assumptions about the hardware available in the future, and big assumptions. And again you are making it impossible to run a Bitcoin node in huge swaths of the world, not to mention behind Tor.

You also have to ask the question, what % of that 3TiB/month results in unspent txouts? Ultimately it's the UTXO set that is the hard limit on the storage requirements for full validating nodes. Even at just 1% volume growth, you're looking at 3GiB/month growth in your requirement for fast-random-access memory.

How did you arrive at 3GB/month? The entire UTXO set currently fits in a few hundred megs of RAM.

I'm assuming 1% of transactions per month get added to the UTXO set. With cheap transactions, increased UTXO-set consumption for trivial purposes - like SatoshiDice's stupid failed-bet messaging and timestamping - is made more likely, so I suspect 1% is reasonable.

Again, other than making old UTXOs eventually become unspendable, I don't see any good solutions to UTXO growth.
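
One back-of-envelope way to land near that figure from the 1% assumption; the per-transaction and per-UTXO-entry sizes here are illustrative guesses, not numbers stated above:

Code:
# Illustrative back-of-envelope only; the 400-byte average transaction and
# 40-byte UTXO entry are assumed sizes, not figures from this thread.
monthly_volume_bytes = 3 * 2**40      # ~3 TiB of raw transaction data per month
avg_tx_size = 400                     # assumed average transaction size (bytes)
utxo_entry_size = 40                  # assumed size of one unspent-output entry
surviving_fraction = 0.01             # 1% of transactions leave a lasting output

txs_per_month = monthly_volume_bytes / avg_tx_size
utxo_growth = txs_per_month * surviving_fraction * utxo_entry_size
print(f"{utxo_growth / 2**30:.1f} GiB/month of UTXO growth")   # ~3.1 GiB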

All the time you're spending waiting for transactions to be retrieved from memory is time you aren't hashing.

Why? Hashing happens in parallel to checking transactions and recalculating the merkle root.

I mean proof of work hashing for mining. If you don't know what transactions were spent by the previous block, you can't safely create the next block without accidentally including a transaction spent by the previous one, and thus invalidating your block.

Your example has nothing to do with Bitcoin. Even in the early days it would be obvious to anyone who understood comp-sci that static websites are O(1) scaling per client, so there isn't any reason to think you couldn't create websites for as much load as you wanted.

Nobody in 1993 could build a website that the entire world used all the time (like Google or Wikipedia). The technology did not exist.

Don't be silly. Even in 1993 people knew that you would be able to do things like have DNS servers return different IPs each time - Netscape's 1994 homepage used hard-coded client-side load-balancing implemented in the browser, for instance.

DNS is another good example: the original hand-maintained hosts.txt file was unscalable, and sure enough it was replaced by the hierarchical and scalable DNS system in the mid-80s.

Or what about the global routing table? Every backbone router needs a complete copy of the routing table. BGP is a broadcast network. How can the internet backbone scale? Perhaps we should only allow people to access the internet at universities to avoid uncontrollable growth of the routing table.

...and what do you know, one of the arguments for IPv6 even back in the early 90s was that the IPv4 routing space wasn't very hierarchical and would lead to scaling problems for routers down the line. The solution implemented has been to use various technological and administrative measures to keep top-level table growth in control. In 2001 there were 100,000 entries, and 12 years later in 2013 there are 400,000 - nearly linear growth. Fortunately the nature of the global routing table is that linear top-level growth can support quadratic and more growth in the number of underlying nodes; getting access to the internet does not contribute to the scaling problem of the routing table.

On the other hand, getting provider-independent address space, a resource that does increase the burden on the global routing table, gets harder and harder every year. Like Bitcoin it's an O(n^2) scaling problem, and sure enough the solution followed has been to keep n as low as possible.

The way the internet has actually scaled is more like what I'm proposing with fidelity-bonded chaum banks: some number of n banks, each using up some number of transactions per month, but in turn supporting a much larger number m of clients. The scaling problem is solved hierarchically, and thus becomes tractable.
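
To make the comparison concrete, here is a toy sketch of the two scaling models being contrasted - every full node validating every user's transactions versus a fixed set of on-chain banks each serving many off-chain clients. All numbers are arbitrary placeholders:

Code:
# Toy comparison of total validation work; every figure is a placeholder.
def broadcast_work(users, tx_per_user):
    # Each of the 'users' full nodes validates every other user's transactions.
    return users * users * tx_per_user

def hierarchical_work(banks, clients_per_bank, tx_per_bank):
    # Only the banks settle on-chain; clients transact via their bank.
    on_chain = banks * banks * tx_per_bank
    users_served = banks * clients_per_bank
    return on_chain, users_served

print(broadcast_work(10_000, 10))          # 1,000,000,000 validations
print(hierarchical_work(100, 10_000, 10))  # (100,000 validations, 1,000,000 users)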

Heck, while we're playing this game, find me a single major O(n^2) internet scaling problem that's actually been solved by "just throwing more hardware at it", because I sure can't.

I just don't see scalability as ever being a problem, assuming effort is put into better software. Satoshi didn't think this would be a problem either; it was one of the first conversations we ever had. These conversations have been going around and around for years. I am unconvinced we're developing any better insight into it anymore. Satoshi's vision was for the block limit to be removed. So let's do it.

Appeal to authority. Satoshi also didn't make the core and the GUI separate, among many, many other mistakes and oversights, so I'm not exactly convinced I should assume that just because he thought Bitcoin could scale it actually can.

Pieter Wuille
February 18, 2013, 10:02:52 PM
#27

First of all, my opinion: I'm in favor of increasing the block size limit in a hard fork, but very much against removing the limit entirely. Bitcoin is a consensus of its users, who all agreed (or will need to agree) to a very strict set of rules that would allow people to build a global decentralized payment system. I think very few people understand a forever-limited block size to be part of these rules.

However, with no limit on block size, it effectively becomes miners who are in control of _everyone_'s block size. As a non-miner, this is not something I want them to decide for me. Perhaps the tragedy of the commons can be avoided, and long-term rational thinking will kick in, and miners can be trusted with choosing an appropriate block size. But maybe not, and if just one miner starts creating gigabyte blocks while all the rest agree on 10 MiB blocks, ugly block-shunning rules will be necessary to avoid such blocks from filling everyone's hard drive (yes, larger blocks' slower relay will make them unlikely to be accepted, but it just requires one lucky fool to succeed...).

I think retep raises very good points here: the block size (whether voluntary or enforced) needs to result in a system that remains verifiable for many. Who those many are will probably change gradually. Over time, more and more users will probably move to SPV nodes (or more centralized things like e-wallet sites), and that is fine. But if we give up the ability for non-megacorp entities to verify the chain, we might as well be using a central clearinghouse. There is of course a wide spectrum between "I can download the entire chain on my phone" and "Only 5 bank companies in the world can run a fully verifying node", but I think it's important that we choose which point in between is acceptable.

My suggestion would be a one-time increase to perhaps 10 MiB or 100 MiB blocks (to be debated), and after that an at-most slow exponential further growth. This would mean no for-eternity limited size, but also no way for miners to push up block sizes to the point where they are in sole control of the network. I realize that some people will consider this an arbitrary and unnecessary limit, but others will probably consider it dangerous already. In any case, it's a compromise and I believe one will be necessary.
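
Purely as an illustration of what "a one-time increase followed by at-most slow exponential further growth" could look like - the starting size, growth rate, and step interval below are placeholders, not values being proposed:

Code:
# Hypothetical schedule: jump to 10 MiB at the fork, then grow at most 20%/year.
# All parameters are illustrative placeholders.
START_SIZE = 10 * 2**20   # 10 MiB after the one-time increase
ANNUAL_GROWTH = 1.20      # upper bound on growth per year

def max_block_size(years_after_fork):
    return int(START_SIZE * ANNUAL_GROWTH ** years_after_fork)

for year in (0, 5, 10, 20):
    print(year, round(max_block_size(year) / 2**20, 1), "MiB")
# 0 -> 10.0, 5 -> ~24.9, 10 -> ~61.9, 20 -> ~383.4 MiB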

Great posts from Mike and Gavin in this thread. There's indeed no reason to panic over "too much centralization". Actually, setting an arbitrary limit (or an arbitrary formula to set the limit) is the very definition of "central planning", while letting it get spontaneously set is the very definition of "decentralized order".

Then I think you misunderstand what a hard fork entails. The only way a hard fork can succeed is when _everyone_ agrees to it. Developers, miners, merchants, users, ... everyone. A hard fork that succeeds is the ultimate proof that Bitcoin as a whole is a consensus of its users (and not just a consensus of miners, who are only given authority to decide upon the order of otherwise valid transactions).

Realize that Bitcoin's decentralization only comes from very strict - and sometimes arbitrary - rules (why this particular 50/25/12.5 payout scheme, why ECDSA, why only those opcodes in scripts, ...) that were set right from the start and agreed upon by everyone who ever used the system. Were those rules "central planning" too?

cjp
February 18, 2013, 10:09:32 PM
#28

It is a technical decision, not political. The block size can not be determined on the basis of political beliefs. I'm pretty sure about this.

I disagree. Any decision that has political consequences is a political decision, whether you deny/ignore it or not. I even doubt whether technical, non-political decisions actually exist. You develop technology with a certain goal in mind, and the higher goal is usually of a political nature. So, when you propose a decision, please explicitly list the (political) goals you want to achieve, and all the expected (desired and undesired) (political) side-effects of your proposal. That way, the community might come to an informed consensus about your decision.

Regarding the transaction limit, I see the following effects (any of which can be chosen as goals / "antigoals"):
  • Increasing/removing the limit can lead to centralization of mining, as described by the OP (competition elimination by bandwidth)
  • Increasing/removing the limit can lead to reduced security of the network (no transaction scarcity->fee=almost zero->difficulty collapse->easy 51% attack and other attacks). I think this was a mistake of Satoshi, but it can be solved by keeping a reasonable transaction limit (or keep increasing beyond 21M coins, but that would be even less popular in the community).
  • Centralization of mining can lead to control over mining (e.g. 51% attack, but also refusal to include certain transactions, based on arbitrary policies, possibly enforced by governments on the few remaining miners)
  • Increasing/removing the limit allows transaction volume to increase
  • Increasing/removing the limit allows transaction fees to remain low
  • Increasing/removing the limit increases hardware requirements of full nodes

Also: +100 for Pieter Wuille's post. It's all about community consensus. And my estimate is that 60 MiB/block should be sufficient for worldwide usage, if my Ripple-like system becomes successful (otherwise it would have to be 1000 times more). I'd agree with a final limit of 100 MiB, but right now that seems way too much, considering current Internet speeds and storage capacity. So I think we will need to increase it at least twice.

hazek
February 18, 2013, 11:14:46 PM
#29

In the light of this, and because the need for bitcoins primarily comes from the need for a decentralized, no-point-of-control system, I think it's not sufficient to call worries about centralization "vague": you have to clearly defend why this particular form of centralization can not be dangerous. The default is "centralization is bad".

It is a technical decision, not political. The block size can not be determined on the basis of political beliefs. I'm pretty sure about this.

If we're talking about centralization we should focus on Mt. Gox, but that's a different story.

It is a technical decision, but a technical decision about how to provide both scalability and security. And like it or not, decentralization is part of the security equation and must be taken into account when changing anything that would diminish it.

hazek
February 18, 2013, 11:17:32 PM
#30

Network assurance contracts are far from a sure thing. It's basically an attempt to solve the tragedy of the commons, and the success rate society has had there is pitiful, even with strong central authorities. Assuming they will work is a big risk.

Sure, there is some risk, but Kickstarter is showing that the general concept can indeed fund public goods.

Off topic:
It's interesting you say that. I'm guessing you don't think there was a need for a Bitcoin Foundation then, do you?

hazek
February 18, 2013, 11:38:38 PM
#31

I feel these debates have been going on for years. We just have wildly different ideas of what is affordable or not.

I don't think the most fundamental debate is about how high the limit should be. I made some estimates about how high it would have to be for worldwide usage, which is quite a wild guess, and I suppose any estimation about what is achievable with either today's or tomorrow's technology is also a wild guess. We can only hope that what is needed and what is possible will somehow continue to match.

But the most fundamental debate is about whether it is dangerous to (effectively) disable the limit. These are some ways to effectively disable the limit:
  • actually disabling it
  • making it "auto-adjusting" (so it can increase indefinitely)
  • making it so high that it won't ever be reached

I think the current limit will have to be increased at some point in time, requiring a "fork". I can imagine you don't want to set the new value too low, because that would make you have to do another fork in the future. Since it's hard to know what's the right value, I can imagine you want to develop an "auto-adjusting" system, similar to how the difficulty is "auto-adjusting". However, if you don't do this extremely carefully, you could end up effectively disabling the limit, with all the potential dangers discussed here.

You have to carefully choose the goal you want to achieve with the "auto-adjusting", and you have to carefully choose the way you measure your "goal variable", so that your system can control it towards the desired value (similar to how the difficulty adjustment steers towards 10 minutes/block).

One "goal variable" would be the number of independent miners (a measure of decentralization). How to measure it? Maybe you can offer miners a reward for being "non-independent"? If they accept that reward, they prove non-independence of their different mining activities (e.g. different blocks mined by them); the reward should be larger than the profits they could get from further centralizing Bitcoin. This is just a vague idea; naturally it should be thought out extremely carefully before even thinking of implementing this.


Of all the posts, this one makes the most sense to me - a layman. I'm aware that means practically nothing, aside from the fact that I won't download a new version of Bitcoin-Qt if I don't like the hard fork rules. A carefully chosen auto-adjusting block size limit that keeps block space scarce and encourages fees, while keeping mining reasonably open to competition and solving the scalability issue, seems like a good compromise.

But how many of all transactions should on average fit into a block? 90%? 80%? 50%? Can anyone come up with some predictions and estimates of how various auto-adjusting rules could potentially play out?
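
In the spirit of that question, here is a toy simulation of one possible auto-adjusting rule, one that nudges the limit toward blocks being about 80% full on average; the rule, the demand curve, and every parameter are invented purely for illustration:

Code:
# Toy simulation of an auto-adjusting limit targeting ~80%-full blocks.
# The rule and the demand growth are invented for illustration only.
limit = 1_000_000          # start at 1 MB
demand = 500_000           # bytes of fee-paying transactions offered per block
TARGET_FULLNESS = 0.80

for period in range(10):
    avg_block = min(demand, limit)                 # blocks cannot exceed the limit
    fullness = avg_block / limit
    # Nudge the limit toward the target, capped at +/-10% per adjustment period.
    factor = max(0.9, min(1.1, fullness / TARGET_FULLNESS))
    limit = int(limit * factor)
    demand = int(demand * 1.15)                    # assume demand grows 15%/period
    print(period, f"limit={limit}", f"fullness={fullness:.0%}")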

Zeilap
February 19, 2013, 12:31:36 AM
#32

But how many of all transactions should on average fit into a block? 90%? 80%? 50%? Can anyone come up with some predictions and estimates of how various auto-adjusting rules could potentially play out?
If you want the worst case then consider this:

Some set of miners decide, as Peter suggests, to increase the blocksize in order to reduce competition. Thinking long term, they decide that a little money lost now is worth the rewards of controlling a large portion of the network's mining.
1) The miners create thousands of addresses and send funds between them as spam (this is the initial cost)
  a) optional - add enough transaction fee so that legitimate users get upset about tx fees increasing and call for blocksize increases
2) The number of transactions is now much higher than the blocksize allows, forcing the auto-adjust to increase blocksize
3) while competition still exists, goto step 1
4) Continue sending these spam transactions to maintain the high blocksize. Added bonus: the transaction fees you pay go back to yourself, i.e. the transactions are free! (see the sketch after this list)
5) Profit!
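
One refinement to step 4 above: the fees on the spam only come back when the attacker's own blocks include it, so the expected cost per period is roughly the spam fees multiplied by everyone else's share of the hashrate. A tiny sketch with hypothetical figures:

Code:
# Expected cost of the spam strategy: fees are only "free" in the attacker's
# own blocks. Both inputs are hypothetical figures.
def expected_spam_cost(spam_fees_per_period, attacker_hashrate_share):
    return spam_fees_per_period * (1.0 - attacker_hashrate_share)

print(expected_spam_cost(100.0, 0.30))   # 70.0: fees lost to other miners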
misterbigg
February 19, 2013, 12:54:47 AM
#33

You have to carefully choose the goal you want to achieve with the "auto-adjusting"

Here's a starting point.

A periodic block size adjustment should occur that:

- Maintains some scarcity to maintain transaction fees
- But not too much scarcity (i.e. 1MB limit forever)
- Doesn't incentivize miners to game the system

fornit
February 19, 2013, 12:56:24 AM
#34

Quote
There is of course a wide spectrum between "I can download the entire chain on my phone" and "Only 5 bank companies in the world can run a fully verifying node", but I think it's important that we choose which point in between is acceptable.

+1
And btw, I don't think Mike Hearn's "yay, let's all have glass fibre by next year!" is that acceptable middle ground.

My suggestion would be a one-time increase to perhaps 10 MiB or 100 MiB blocks (to be debated), and after that an at-most slow exponential further growth.

I totally agree with this. Let the network run for a little longer, make a conservative one-time increase in block size, then use the time to analyse the final phase of the 1 MB limit and the effect of the increased limit, and plan another hard fork, possibly with an - again conservatively - adjusting limit.
I am all for radical action if it's necessary. But right now transaction fees are still very low and there is really no need to be radical. It's just an unnecessary risk.

johnyj
February 19, 2013, 01:07:13 AM
#35

Enjoyed reading many great posts here!

If there will be a hard fork from time to time, then I'd prefer to keep each change as progressive/small as possible.

If there will be only ONE hard fork ever, then it is a high-risk gamble.

justusranvier
February 19, 2013, 01:11:25 AM
#36

However, with no limit on block size, it effectively becomes miners who are in control of _everyone_'s block size. As a non-miner, this is not something I want them to decide for me. Perhaps the tragedy of the commons can be avoided, and long-term rational thinking will kick in, and miners can be trusted with choosing an appropriate block size. But maybe not, and if just one miner starts creating gigabyte blocks while all the rest agree on 10 MiB blocks, ugly block-shunning rules will be necessary to avoid such blocks from filling everyone's hard drive (yes, larger blocks' slower relay will make them unlikely to be accepted, but it just requires one lucky fool to succeed...).
In a different thread Gavin proposed removing the hard limit on block size and adding code to the nodes that would reject any blocks that take too long to verify.

That would give control over the size of the blocks to the people who run full nodes.
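
A minimal sketch of what such a node-side rule might look like; the threshold and function names are placeholders, since the actual proposal is only summarised above:

Code:
import time

MAX_VERIFY_SECONDS = 10.0   # placeholder threshold, chosen by each node operator

def accept_block(block, verify_fn):
    # Reject any block this node cannot verify within its own time budget,
    # regardless of the block's size in bytes (a sketch of the idea above).
    start = time.monotonic()
    valid = verify_fn(block)
    elapsed = time.monotonic() - start
    return valid and elapsed <= MAX_VERIFY_SECONDS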
jojkaart
February 19, 2013, 01:22:55 AM
#37

How about tying the maximum block size to mining difficulty?

This way, if the fees start to drop, this is counteracted by the shrinking block size. The only time this counteracting won't be effective is when usage is actually dwindling at the same time.
If the fees start to increase, this is also counteracted by the block size increasing as more mining power comes online.

The difficulty also goes up with increasing hardware capabilities; I'd expect the difficulty increase due to this factor to track the increase in the technical capabilities of computers in general.
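
A minimal sketch of one way to express that coupling - scaling a 1 MB baseline by the ratio of current difficulty to a reference difficulty fixed at the fork. The linear scaling and the reference value are illustrative assumptions, not part of the suggestion:

Code:
# Illustrative coupling of the limit to difficulty; constants are assumptions.
BASE_SIZE = 1_000_000                # 1 MB baseline
REFERENCE_DIFFICULTY = 3_000_000.0   # roughly the early-2013 difficulty, as an anchor

def max_block_size(current_difficulty):
    scale = current_difficulty / REFERENCE_DIFFICULTY
    return int(BASE_SIZE * max(1.0, scale))   # never shrink below the baseline

print(max_block_size(6_000_000.0))   # 2,000,000 bytes when difficulty doubles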
commonancestor
February 19, 2013, 02:42:45 AM
#38

Interesting debate.

First of all, my opinion: I'm in favor of increasing the block size limit in a hard fork, but very much against removing the limit entirely. Bitcoin is a consensus of its users, who all agreed (or will need to agree) to a very strict set of rules that would allow people to build a global decentralized payment system. I think very few people understand a forever-limited block size to be part of these rules.

...

My suggestion would be a one-time increase to perhaps 10 MiB or 100 MiB blocks (to be debated), and after that an at-most slow exponential further growth. This would mean no for-eternity limited size, but also no way for miners to push up block sizes to the point where they are in sole control of the network. I realize that some people will consider this an arbitrary and unnecessary limit, but others will probably consider it dangerous already. In any case, it's a compromise and I believe one will be necessary.

Realize that Bitcoin's decentralization only comes from very strict - and sometimes arbitrary - rules (why this particular 50/25/12.5 payout scheme, why ECDSA, why only those opcodes in scripts, ...) that were set right from the start and agreed upon by everyone who ever used the system. Were those rules "central planning" too?

I tend to agree with Pieter.

First of all, the true nature of Bitcoin seems to be its rigid protocol, as that helps its credibility with the masses. Otherwise one day you remove the block size limit, the next day you remove ECDSA, then change the block frequency to 1 per minute, then print more coins. It actually sounds more appropriate to do such changes in a different implementation.

Then I can't help asking: with such a floating block limit, isn't everyone afraid of chain splits? I can imagine a split occurring when a big block is accepted by 60% of the nodes and rejected by the rest.

How about tying the maximum block size to mining difficulty?
...
The difficulty also goes up with increasing hardware capabilities; I'd expect the difficulty increase due to this factor to track the increase in the technical capabilities of computers in general.

This sounds interesting.
SimonL
February 19, 2013, 03:03:44 AM
#39

I actually posted the below in the max_block_size fork thread but got absolutely no feedback on it, so rather than create a new thread for exposure I am reposting it here in full. It's something to think about with regard to a fairly simple process for giving the network a floating blocksize - one conservative enough to avoid abuse, and working in tandem with difficulty so no new mechanisms need to be made. I know there are probably a number of holes in the idea, but I think it's a start and could be made viable, so that we get a system that allows blocks to get bigger but doesn't run out of control such that only large miners can participate, and that also avoids situations where the difficulty could be manipulated if there were no max blocksize limit. OK, here goes.

I've been stewing over this problem for a while and would just like to think aloud here....

I very much think the blocksize should be network-regulated, much like difficulty is used to regulate the propagation window based on the amount of computation used to find hashes for particular difficulty targets. To clarify, when I say CPU I mean CPUs, GPUs, and ASICs collectively.

Difficulty is very much focused on the network's collective CPU cycles to control propagation windows (1 block every 10 mins), avoid 51% attacks, and distribute new coins.

However, max_blocksize is not related to the computing resources needed to validate transactions and propagate blocks regularly; it is geared much more to network speed, the storage capacity of miners (and even of non-mining full nodes), and the verification of transactions (which, as I understand it, means hammering the disk). What we need to determine is whether the nodes supporting the network can quickly and easily propagate blocks without this affecting the propagation window.

Interestingly there is a connection between CPU resources, the calculation of the propagation window with difficulty targets, and network propagation health. If we have no max_blocksize limit in place, it leaves the network open to a special type of manipulation of the difficulty.

The propagation window can be manipulated in two ways, as I see it. One is creating more blocks, as we classically know: throw more CPUs at block creation and more blocks get produced - more computation power = more blocks - and the difficulty ensures the propagation window doesn't get manipulated this way. The difficulty adjustment uses the timestamps in blocks to determine whether more or fewer blocks were created in a certain period and whether difficulty goes up or down. All taken care of.

The propagation window could also be manipulated in a more subtle way, though: the transmission of large blocks (huge blocks, in fact). Large blocks take longer to transmit, longer to verify, and longer to write to disk, and this manipulation of the rate at which blocks get produced is unlikely to be noticed until a monster block gets pushed across the network (in a situation where there is no limit on blocksize, that is). Now, because there is only a 10 minute window, the block can't take longer than that to propagate, I'm guessing. If it does, difficulty will sink and we have a whole new problem: manipulation of the difficulty through massive blocks. Massive blocks could mess with difficulty and push out smaller miners, causing all sorts of undesirable centralisation. In short, it would probably destroy the Bitcoin network.

So we need a maximum block size that is high enough that the vast majority of nodes are comfortable with it, but isn't so big that it can be used to manipulate the difficulty by artificially slowing propagation across the network with massive blocks. With the help of the propagation window maintained by the difficulty, we may be able to determine whether the propagation of blocks is slowing and whether max_blocksize should be adjusted down to keep the propagation window stable.

Because the difficulty can potentially be manipulated this way, we may have a means of knowing what the Bitcoin network is comfortable propagating. It could be determined thusly:

If the median size of the blocks transmitted in the last difficulty period is bumping up against max_blocksize (the median being chosen to avoid situations where one malicious entity, or entities, tries to arbitrarily push up the max_blocksize limit), and the difficulty is "stable", increase max_blocksize (say by 10%) for the next difficulty period (say when the median is within 20% of max_blocksize); but if the median size of blocks for the last period is much lower (say less than half the current blocksize limit), then lower the size by 20% instead.

However, if the median size of the blocks transmitted in the last difficulty period is bumping up against max_blocksize and the difficulty is NOT stable, don't increase max_blocksize, since there is a possibility that the network is not currently healthy and increasing or decreasing max_blocksize is a bad idea. Or, alternatively, in those situations lower max_blocksize by 10% for the next difficulty period anyway (not sure if this is a good idea or not, though).

In either case, 1 MB should be the lowest max_blocksize goes if it continues to shrink. Condensing all that down, in Python-like pseudocode...

Code:
# Blocks nearly full (median within 10% of the cap) and difficulty rose:
# the network is keeping up, so allow 10% more room next period.
if median_blocksize >= 0.9 * max_block_size and new_difficulty > prev_difficulty:
    max_block_size *= 1.10

otherwise,

Code:
# Blocks nearly full but difficulty fell: the network may be struggling,
# so shrink by 10%, never dropping below the 1 MB floor.
if median_blocksize >= 0.9 * max_block_size and new_difficulty < prev_difficulty:
    max_block_size = max(max_block_size * 0.90, 1_000_000)


Checking the stability of the last difficulty period against the next one is what determines whether the network is spitting out blocks at a regular rate or not. If the median size of blocks transmitted in the last difficulty period is bumping up against the limit and difficulty is going down, it could mean a significant number of nodes can't keep up; especially if the difficulty needs to move down, that means blocks aren't getting to all the nodes in time and hashing capacity is being cut off because nodes are too busy verifying the blocks they received. If the difficulty is going up and the median block size is bumping up against the limit, then there's a strong indication that nodes are all processing the blocks they receive easily, so raising the max_blocksize limit a little should be OK. The one thing I'm not sure of, though, is determining whether the difficulty is "stable" or not; I'm very much open to suggestions on the best way of doing that. The argument that what is deemed "stable" is arbitrary, and could still lead to manipulation of max_blocksize over a longer and more sustained period, is I think possible too, so I'm not entirely sure this approach can be made foolproof. How does the calculation of difficulty targets take these things into consideration?

OK, guys, tear it apart.
Jutarul
February 19, 2013, 06:02:08 AM
#40

I think we should put users first. What do users want? They want low transaction fees and fast confirmations.
This comes down to Bitcoin as a payment network versus Bitcoin as a store of value. I thought it was already determined that there will always be better payment networks that function as alternatives to Bitcoin. A user who cares about the store-of-value use case is going to want the network hash rate to be as high as possible. This is at odds with low transaction fees and fast confirmations.
This!

Bitcoin is about citizen empowerment. When ordinary citizens can't run their own validating nodes anymore, you've lost that feature (independent of the question of hashing). Then Bitcoin is commercialized. The Bitcoin devs need to keep that in mind. (If you need to freshen up on your brainwash, here's a great presentation from Rick Falkvinge: http://www.youtube.com/watch?v=mjmuPqkVwWc)
