Author Topic: How a floating blocksize limit inevitably leads towards centralization  (Read 71574 times)
Technomage (Legendary)
February 21, 2013, 03:35:31 PM
Last edit: February 21, 2013, 03:52:22 PM by Technomage
#301

Ten times the block size seems like scarcity is banished far into the future in one huge jump.

Even just doubling it is a massive increase, especially while blocks are typically still far from full.

Thus to me it seems better never to more than double it in any one jump.

If tying those doublings to the halvings of the block subsidy is too slow a rate of increase, then maybe use Moore's Law or thereabouts, increasing by 50% yearly or by 100% every eighteen months.

It is hard to feel like there is anywhere close to being a "need" for more space when I have never yet had to pay a fee to transfer bitcoins.

The rationale for the 10MB cap is that it would allow us to scale to PayPal transaction levels right away, and it's arguable that Bitcoin might not actually need more than that. The second rationale is that it would still allow regular people to run full nodes, thus retaining decentralization. The third rationale is that the issue of scarcity can be postponed, because it won't be an issue for a long time. We're still in the era of a large fixed block reward and we are only very slowly moving into the "small fixed reward" era.

I have sort of started liking the idea that we would double the block size on each reward halving, though. The only problem with that is that if the number of Bitcoin transactions stops growing for some reason unrelated to this, while there is still very high (or even growing) value in the blockchain, the block size would keep rising without an increase in transactions. That would mean less protection for the network even though the value in the blockchain might still be very large or even growing.

This is a potential issue with a 10MB limit as well, but I have a hard time seeing it happen. Bitcoin only needs to grow roughly 20-fold to start pushing the 10MB limit, and pushing it wouldn't be bad either; 70 tx/s should be enough for a lot of things. We could just let free and super-low-fee transactions not get (fast) confirmations at that point. That is okay, I think. The 7 tx/s cap that we have now is simply not going to be enough, that much is clear. It's too limiting.

However, I do agree that this whole issue is not something that we need to resolve right now. The blocks do not currently have scarcity to speak of. This is all about creating a plan for what we're going to do in the future. The actual hard fork will happen a year from now at the earliest.
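
For reference, those throughput figures follow from simple arithmetic. A quick sketch (the ~250 bytes per average transaction is an assumption; the 600-second block interval is the protocol target):

Code:
# Rough throughput estimate: transactions per second as a function of the block size cap.
# Assumes ~250 bytes per transaction on average (an assumption) and one block per 600 s.
AVG_TX_BYTES = 250
BLOCK_INTERVAL_S = 600

for cap_mb in (1, 10):
    tx_per_block = cap_mb * 1000 * 1000 / AVG_TX_BYTES
    print(f"{cap_mb} MB cap: ~{tx_per_block:.0f} tx/block, ~{tx_per_block / BLOCK_INTERVAL_S:.0f} tx/s")

# 1 MB  -> ~4000 tx/block, ~7 tx/s
# 10 MB -> ~40000 tx/block, ~67 tx/s (the "70 tx/s" figure above)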

d'aniel (Sr. Member)
February 21, 2013, 03:57:58 PM
#302

If we want to cap the download overhead for the latest block at, say, 1% of the time, we need to be able to download MAX_BLOCKSIZE bytes within 6 seconds on average so that we can spend 99% of the time hashing.

At 1MB, you would need a ~1.7Mbps connection to keep download time to 6s.
At 10MB, 17Mbps
At 100MB, 170Mbps

and you start to see why even a 100MB block size would render 90% of the world population unable to participate in mining.
Even at 10MB, it requires investing in a relatively high-speed connection.

Also of importance is the fact that local and international bandwidth can vary by large amounts. A 1Gbps connection in Singapore (http://www.starhub.com/broadband/plan/maxinfinitysupreme.html) only gives you 100Mbps international bandwidth, meaning you only have 100Mbps available for receiving newly mined blocks.

Since a couple people have thanked the author for posting this, I thought I should mention that only transaction hashes need to be sent in bursts.  So a block of 1000 transactions (roughly 1MB) only requires 30KB of data to be sent in a burst, requiring a ~43Kbps connection to keep downloading time to 6s.  100MB blocks require ~4.3Mbps.  The continuous downloading of transaction data is below these limits.
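
To make the burst arithmetic explicit: the figures above seem to assume roughly 1,000 transactions per MB of block and ~32 bytes per transaction hash, with the quoted ~43 Kbps and ~4.3 Mbps presumably including a little protocol overhead. A sketch under those assumptions:

Code:
# Burst bandwidth needed to receive only the transaction hashes of a new block
# within 6 seconds, under the assumptions stated above.
HASH_BYTES = 32        # size of one transaction hash
TX_PER_MB = 1000       # ~1 KB per transaction on average (assumption)
TARGET_SECONDS = 6

for block_mb in (1, 10, 100):
    n_tx = block_mb * TX_PER_MB
    burst_bytes = n_tx * HASH_BYTES            # block header and coinbase ignored
    kbps = burst_bytes * 8 / TARGET_SECONDS / 1000
    print(f"{block_mb:>3} MB block: {n_tx} tx, burst ~{burst_bytes // 1000} KB, ~{kbps:.0f} Kbps")

# 1 MB   ->   ~32 KB burst, ~43 Kbps
# 100 MB -> ~3200 KB burst, ~4267 Kbps (~4.3 Mbps)
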
wtfvanity (Hero Member)
February 21, 2013, 04:07:08 PM
#303

Quote from: Technomage on February 21, 2013, 03:35:31 PM
The rationale for the 10MB cap is that it would allow us to scale to PayPal transaction levels right away [...]


I'm not saying that 10x is the magical number. I'm saying that both mining and running a full client are still easily done at 10 meg blocks.

Quote from: d'aniel on February 21, 2013, 03:57:58 PM
Since a couple people have thanked the author for posting this, I thought I should mention that only transaction hashes need to be sent in bursts. [...]

The full block download and verification isn't needed to start hashing the next block?

Jutarul (Donator, Legendary)
February 21, 2013, 04:11:26 PM
#304

The full block download and verification isn't needed to start hashing the next block?
The idea is to submit only the transaction hashes which go into the Merkle tree, instead of the transaction data, because it is likely that you have already received and validated those transactions before you receive a block containing them. This technique removes redundancy from the communication between the node and the network and significantly reduces the time it takes to propagate a valid block.
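
A minimal sketch of that idea (hypothetical helper names, not the actual P2P messages): a node that has already seen most transactions only needs the header plus the ordered list of hashes, and fetches the few transactions it is missing.

Code:
# Sketch of hash-only block propagation. The peer announces the block header plus
# the ordered transaction hashes; we rebuild the full block from our memory pool
# and request only the transactions we have not seen yet.
def reconstruct_block(header, txids, mempool, request_tx):
    """mempool: dict of txid -> raw transaction already received and validated.
    request_tx(txid): fetches a transaction we are missing from the peer."""
    txs = []
    for txid in txids:
        tx = mempool.get(txid)
        if tx is None:
            tx = request_tx(txid)   # only these cost extra bandwidth
        txs.append(tx)
    return {"header": header, "txs": txs}

In the common case nearly every transaction is already in the memory pool, so the burst is just the header and the hash list.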

wtfvanity (Hero Member)
February 21, 2013, 04:13:57 PM
#305

Quote from: Jutarul on February 21, 2013, 04:11:26 PM
The idea is to submit only the transaction hashes which go into the Merkle tree, instead of the transaction data. [...]

But that is not currently how the Satoshi client operates, right? I know we don't have too many huge pools running stock software.

jgarzik (Legendary)
February 21, 2013, 04:16:39 PM
#306

Quote from: wtfvanity on February 21, 2013, 04:13:57 PM
But that is not currently how the Satoshi client operates, right? I know we don't have too many huge pools running stock software.

Several larger pools are running 0.8 or almost-0.8.  Largely stock software (with maybe a patch to filter out SatoshiDICE transactions here and there).


wtfvanity (Hero Member)
February 21, 2013, 04:42:16 PM
#307

Several larger pools are running 0.8 or almost-0.8.  Largely stock software (with maybe a patch to filter out SatoshiDICE transactions here and there).


Hmmm, not really the answer to my question. When a block is found, don't you have to download the whole block to see which transactions it includes so that you can build the Merkle tree?

The removal of redundancy that Jutarul mentioned, is that how 0.8 works?

MoonShadow (Legendary)
February 21, 2013, 05:24:44 PM
#308

Quote from: wtfvanity on February 21, 2013, 04:42:16 PM
When a block is found, don't you have to download the whole block to see which transactions it includes so that you can build the Merkle tree?

No, as transactions are uniquely identifiable by their hash.  The block report need only contain the block header and the Merkle tree of hashes.

Quote

The removal of redundancy that Jutarul mentioned, is that how 0.8 works?

No, but mostly because it was much simpler and more reliable to treat the block as a single data object.  Using this reduced block report to save bandwidth & propagation time has been considered for a long time, but it's not an easy fix.  It requires professionals like Gavin to make it work on testnet.  Satoshi was not an experienced programmer.
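
For reference, a minimal sketch of how the Merkle root in the header is computed from the transaction hashes (double SHA-256, duplicating the last hash when a level has an odd count), which is why a header-plus-hashes block report is enough to pin down exactly which transactions the block commits to:

Code:
# Compute a Bitcoin-style Merkle root from a list of 32-byte transaction hashes.
# Byte-order details (txids are displayed little-endian) are ignored here.
import hashlib

def sha256d(data):
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids):
    level = list(txids)
    if not level:
        raise ValueError("a block always contains at least the coinbase transaction")
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])          # duplicate the last hash on odd levels
        level = [sha256d(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]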

"The powers of financial capitalism had another far-reaching aim, nothing less than to create a world system of financial control in private hands able to dominate the political system of each country and the economy of the world as a whole. This system was to be controlled in a feudalist fashion by the central banks of the world acting in concert, by secret agreements arrived at in frequent meetings and conferences. The apex of the systems was to be the Bank for International Settlements in Basel, Switzerland, a private bank owned and controlled by the world's central banks which were themselves private corporations. Each central bank...sought to dominate its government by its ability to control Treasury loans, to manipulate foreign exchanges, to influence the level of economic activity in the country, and to influence cooperative politicians by subsequent economic rewards in the business world."

- Carroll Quigley, CFR member, mentor to Bill Clinton, from 'Tragedy And Hope'
Nagato (Full Member)
February 21, 2013, 05:43:34 PM
#309

Quote from: d'aniel on February 21, 2013, 03:57:58 PM
Since a couple people have thanked the author for posting this, I thought I should mention that only transaction hashes need to be sent in bursts. [...]

Which I did mention in my next few posts.

Quote from: MoonShadow on February 21, 2013, 05:24:44 PM
Satoshi was not an experienced programmer.

Are you freaking kidding me? He programmed the entire Bitcoin client, with all the protocol rules, scripting and validation, working almost bug-free from day one. Four years later, with a market cap of $300 million, I don't know of even one other full client, even though anyone writing one would have the full source code of the Satoshi client to refer to. That was a feat for a single person!
And you think he would not be able to implement something as simple as changing the block format to only contain hashes?

MoonShadow (Legendary)
February 21, 2013, 05:53:30 PM
#310

Quote from: Nagato on February 21, 2013, 05:43:34 PM
Are you freaking kidding me? He programmed the entire Bitcoin client [...] And you think he would not be able to implement something as simple as changing the block format to only contain hashes?

Satoshi's genius was his unique ability to see the big picture and predict the problems. Programming was not likely a professional skill for him; ask Gavin about that. Satoshi was more likely a professional in the field of economics, perhaps a professor, though probably not, since Austrian economic theory doesn't tend to get much academic respect. He also had a great understanding of cryptographic theory, so he likely had a strong mathematics background, but he didn't use any novel crypto, he just used existing crypto in a novel way. Satoshi deserves respect for what he started, but the current vanilla client is mostly not Satoshi's code. And there were bugs; I was there.

"The powers of financial capitalism had another far-reaching aim, nothing less than to create a world system of financial control in private hands able to dominate the political system of each country and the economy of the world as a whole. This system was to be controlled in a feudalist fashion by the central banks of the world acting in concert, by secret agreements arrived at in frequent meetings and conferences. The apex of the systems was to be the Bank for International Settlements in Basel, Switzerland, a private bank owned and controlled by the world's central banks which were themselves private corporations. Each central bank...sought to dominate its government by its ability to control Treasury loans, to manipulate foreign exchanges, to influence the level of economic activity in the country, and to influence cooperative politicians by subsequent economic rewards in the business world."

- Carroll Quigley, CFR member, mentor to Bill Clinton, from 'Tragedy And Hope'
bullioner (Full Member)
February 21, 2013, 06:38:51 PM
#311

Interesting to see the various proposals for an adaptive protocol level maximum block size.

It seems clear that adaptation should occur based on transaction fees, since they are supposed to take over as the main incentive for securing the transaction log once initial distribution winds down further. This makes fee-based adaptation the closest so far to achieving an equilibrium based on incentives which optimise for the properties I, as a bitcoin user, want: https://bitcointalk.org/index.php?topic=140233.msg1507328#msg1507328. That is: first and foremost I want the (transaction log of the) network to be really well secured. Once that is achieved, I want more transactions to be possible, so long as doing so doesn't destroy the incentives for those securing the network.

That said, I think the proposed rate is too high. We need to budget what *transactors* in the system should need to pay in order to ensure robust security of the transaction log, and not step too far over that level when designing the equilibrium point. 50 BTC per block works out at 12.5% of the monetary base per annum once all coins are created. This seems excessive, though admittedly it is what *holders* of bitcoins are currently paying via the inflation schedule.

Although it is difficult to estimate, the level of transaction fees required, long term, to maintain the security of the transaction log should be the starting point when designing the equilibrium via which an adaptive maximum block size will be set (assuming one doesn't buy Mike's optimism about those incentives being solved by other means).

Suppose the system needs 3% of the monetary base to be spent per annum on securing the transaction log. Then, in the long term, that works out at pretty much 12 BTC per block. We could just call it 12 BTC per block from the start to keep it simple. So once the scheme is in place and the max block size is still 1 MiB, the mean transaction fee over the last N blocks will need to be about 0.0029 BTC to provoke an increase in max block size. That seems pretty doable via market forces. Then block size increases and the mean transaction fee decreases, but total transaction fees remain around the same, until an equilibrium is reached where either block space is no longer scarce, or enough miners, for other reasons, decide to limit the transaction rate softly.

So my question is: apart from waving fingers in the air, are there any good ways to estimate what percentage of the monetary base should be spent by users of the system as a whole, per annum, in order to adequately ensure security of the transaction log?  It's really a risk management question.  As is most of the rest of the design of Bitcoin.
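
Making that arithmetic explicit (the 21 million coin cap and the ~52,560 blocks per year are protocol facts; the roughly 4,000 transactions per 1 MiB block is an assumption about average transaction size):

Code:
# Security budget: annual spend as a fraction of the monetary base, converted to a
# per-block total and to a mean per-transaction fee at 1 MiB blocks.
MONETARY_BASE = 21_000_000        # BTC, final supply
BLOCKS_PER_YEAR = 6 * 24 * 365    # ~52,560

def per_block(annual_fraction):
    return MONETARY_BASE * annual_fraction / BLOCKS_PER_YEAR

print(per_block(0.125))           # ~50 BTC/block: the current 50 BTC subsidy = 12.5%/yr
print(per_block(0.03))            # ~12 BTC/block: the 3%/yr example

# Mean fee per transaction needed to raise ~12 BTC with 1 MiB blocks, assuming an
# average transaction of ~250 bytes (an assumption, not a protocol constant).
tx_per_block = 1024 * 1024 // 250
print(per_block(0.03) / tx_per_block)   # ~0.0029 BTC
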
misterbigg (Legendary)
February 21, 2013, 07:17:30 PM
#312

50 BTC per block works out at 12.5% of the monetary base per annum once all coins are created.

Expressing fees as a percentage of the monetary base is a nice way to quantify their cost, although I should point out that all of the baked-in constants in my proposal are examples, subject to tuning before implementation.

Quote
the level of transaction fees required...to maintain the security of the transaction log, should be the starting point when designing the equilibrium via which an adaptive maximum block size will be set

I agree, and like you said it can be difficult to measure this. All else being equal, security is proportional to fees. The amount of security we want is of course "as much as possible." Therefore, fees need to be maximized.

In another thread, Akka wrote:

Every BTC user has a point where he isn't willing to pay any more in fees to get a transaction processed. Once this point is reached it will be more profitable to allow more transactions.

Some points regarding fees and block size:

1. If the block size is too large, fees will drop from an absence of scarcity
2. If the block size is too small, users at the margins will leave the system (fees too high)
3. Smaller block sizes are preferred to larger ones (more independent miners possible)

The ideal block size is the smallest block size that drives fees up to the threshold of what users are willing to pay.

How do we determine what users are willing to pay?

Quote
So my question is: apart from waving fingers in the air, are there any good ways to estimate what percentage of the monetary base should be spent by users of the system as a whole, per annum, in order to adequately ensure security of the transaction log?

It's a difficult question.

bullioner (Full Member)
February 21, 2013, 07:26:12 PM
#313

Quote from: misterbigg on February 21, 2013, 07:17:30 PM
[...] It's a difficult question.

Glad we are thinking along the same lines, though.  So 3% p.a. (12 BTC per block) it is then!  (joking)
wtfvanity (Hero Member)
February 21, 2013, 07:54:23 PM
#314

Someone please check my math though, I'll update it on just the 10 meg block.

At 0.0005 minimum transaction fee, on blockchain.info I'm seeing about 0.5 BTC per 250 KB of block size. That would be an additional 20 BTC per block with a fully loaded 10 meg block.

And that would be at the current minimum transaction fees and based on what current blocks look like.
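
Under those assumptions (about 0.5 BTC of fees per 250 KB of block space, i.e. roughly 1,000 transactions of ~250 bytes each paying the 0.0005 BTC minimum), the arithmetic checks out:

Code:
# Fee revenue per block if the observed fee density (~0.5 BTC per 250 KB) holds
# all the way up to a completely full block.
FEE_PER_250KB = 0.5                       # BTC, as observed on blockchain.info per the post
fee_per_byte = FEE_PER_250KB / 250_000

for block_mb in (1, 10):
    print(f"{block_mb} MB full block: ~{fee_per_byte * block_mb * 1_000_000:.0f} BTC in fees")

# 1 MB  -> ~2 BTC
# 10 MB -> ~20 BTC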

TalkingAntColony (Member)
February 21, 2013, 08:15:31 PM
#315

If the block size limit is reached (for the average block), miners will implement algorithms to select transactions to include so as to maximize the fee collected. This will drive up the cost of fees as people compete to have their transactions included in a timely manner. Let's keep bitcoin the cheapest transaction processor around by avoiding such a scenario!
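
As an illustration of what such selection could look like, here is a simple greedy sketch that fills the block with the highest fee-per-byte transactions first (it ignores dependencies between transactions and the coinbase/header overhead, and it is not the Satoshi client's actual priority logic):

Code:
# Greedy fee-maximising transaction selection for a size-limited block.
def select_transactions(mempool, max_block_bytes):
    """mempool: iterable of (txid, size_bytes, fee_btc) tuples.
    Returns the chosen txids and the total fee collected."""
    chosen, used_bytes, total_fee = [], 0, 0.0
    # Highest fee per byte first.
    for txid, size, fee in sorted(mempool, key=lambda t: t[2] / t[1], reverse=True):
        if used_bytes + size <= max_block_bytes:
            chosen.append(txid)
            used_bytes += size
            total_fee += fee
    return chosen, total_fee

The scarcer block space is, the higher the fee per byte a transaction needs to make the cut, which is exactly the competition described above.
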
Timo Y (Legendary)
February 21, 2013, 08:39:31 PM
Last edit: February 22, 2013, 01:56:31 PM by Timo Y
#316

[...]
Yes, there will likely only be around 10 billion people on the planet, but that's a hell of a lot of transactions. At one transaction per person per day you've got 115,700 transactions per second. Sorry, but there are lots of reasons to think Moore's law is coming to an end, and in any case the issue I'm most worried about is network scaling, and network scaling doesn't even follow Moore's law.

Making design decisions assuming technology is going to keep getting exponentially better is a huge risk when transistors are already only a few orders of magnitude away from being single atoms.
[...]

Moore's Law predicts the performance of a single CPU. In terms of bitcoin scalability, it doesn't matter how fast a single CPU will be 10 years from now.

Rather, the crucial measure is cost per CPU calculation. Transistors may be reaching physical limits, but in terms of producing CPUs more cheaply (and thus in greater quantity), and in terms of energy efficiency, there are still many orders of magnitude of growth potential.

There is no reason why a bitcoin node can't run on a multiprocessor machine.

cjp (Full Member)
February 21, 2013, 08:42:06 PM
#317

My brain bandwidth (averaged over the week) is low enough to make it difficult for me to mine a new comment in this forum chain  Roll Eyes
Luckily, most comments I wanted to make were already made by others.

My current "big picture" opinion on this matter (subject to change):
The current limit should be considered an opportunity rather than a threat: we can observe the effect of a hard limit before Bitcoin becomes really big. I don't think we will hit it really hard: the soft limit and the existence of "frivolous" transactions act as a buffer. The best way to prepare for the hard limit in the short term is:
  • Add functionality to Bitcoin clients to guide the user in navigating the "transaction market" (choosing an appropriate fee for his transaction). This can be based on recently observed transaction fees in the block chain (see the sketch after this list).
  • Inform people that this is going to happen, and what the effects might be. We were all informed of the reward halving, and because of that, the event itself didn't cause any panic.
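
A sketch of the fee-guidance idea from the first bullet (hypothetical data shapes; the percentile is an arbitrary choice, and a real client would read these numbers from its own copy of the block chain):

Code:
# Suggest a fee rate from recently observed transaction fees in the block chain.
def suggest_fee_per_byte(recent_blocks, percentile=0.5):
    """recent_blocks: list of blocks, each a list of (size_bytes, fee_btc) per transaction.
    Returns a fee rate in BTC per byte; multiply by your transaction size for a total fee."""
    rates = sorted(fee / size
                   for block in recent_blocks
                   for size, fee in block
                   if size > 0)
    if not rates:
        return 0.0
    idx = min(int(len(rates) * percentile), len(rates) - 1)
    return rates[idx]
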
Sure, the current limit will slow down the Bitcoin economy, but to be honest, it is currently quite over-heated: in mainstream economies, >10%/yr is considered overheated, but Bitcoin adoption is currently even faster than Moore's law! We need time to develop secure off-blockchain payment systems based on Bitcoin (I'm working on this), to wait for people to have faster Internet and bigger hard drives and to reach consensus about the best block size limit method. Once transaction fees become painfully high and bigger blocks are no longer a problem for "non-commercial" owners of full nodes, the "hard fork" needs to be performed. Naturally, the "hard fork" decision needs to take place before "real problems" occur.

I am in favor of automatically adjusting the block size limit, if the adjustment method meets the following conditions:
  • The block size limit remains sufficiently low to allow funding of an "unbreakable" difficulty level through transaction fees. "Unbreakable" means at least that beating the difficulty level is more costly than it would be profitable, for any organization (incl. governments).
  • The block size limit remains sufficiently low to allow an average person to set up a full node with affordable investments. "Affordable" is such that some people will do this out of altruism.
  • The block size limit remains sufficiently high to allow "large" personal transactions (e.g. >$1000) to take place with acceptable fee levels.
  • It is not possible for miners or pool operators to perform the type of attack as described by the OP, either deliberately or not deliberately
  • The adjusting method provides a significant advantage over a (more simple) constant value, e.g. it auto-adjusts to average hardware improvements

If no such method is found, I am in favor of increasing the block size limit to a higher constant value, as long as it meets the following conditions:
  • The block size limit is sufficiently low to allow funding of an "unbreakable" difficulty level through transaction fees. "Unbreakable" means at least that beating the difficulty level is more costly than it would be profitable, for any organization (incl. governments).
  • The block size limit is sufficiently low to allow an average person to set up a full node with affordable investments. "Affordable" is such that some people will do this out of altruism.
  • The block size limit is sufficiently low to make it impossible for miners or pool operators to perform the type of attack as described by the OP, either deliberately or not deliberately
  • The block size limit is sufficiently high to allow "large" personal transactions (e.g. >$1000) to take place with acceptable fee levels.
  • Mining is likely to remain decentralized to a high degree, to avoid a "single point of control"

If no such level is possible, I'm not sure what to do. It would mean having to decide between two different kinds of centralization: either mining centralization or payment processor centralization. I think we'd have to choose the one which will be the most reversible, so that it can be reversed once hardware capabilities improve.

notig (Sr. Member)
February 21, 2013, 08:51:57 PM
#318


  • The block size limit is sufficiently high to allow "large" personal transactions (e.g. >$1000) to take place with acceptable fee levels.


Correct me if I'm wrong (unsure), but the bitcoin network doesn't "care" how much money you are moving. What it cares about is the number of inputs. So it's possible to have a large amount of money with few inputs, and a small amount of money with many inputs. So these "acceptable levels" are going to apply to small transactions too. Which, as I said earlier, means that bitcoin will probably fail with a hard low limit. Because if you are designing the network to have acceptable fee levels for $1k transactions, then those levels will also apply to smaller transactions. And those transactions and people will seek out a new cryptocurrency.
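
Right: what the network prices is transaction size (roughly, the number of inputs and outputs), not the amount being moved. A rough sketch using a common size approximation for transactions of this era (the 148/34/10-byte figures and the 0.0005 BTC/KB fee are approximations, not protocol rules):

Code:
# Approximate transaction size and minimum fee as a function of inputs and outputs.
import math

FEE_PER_KB = 0.0005   # BTC per started kilobyte (the default fee mentioned earlier)

def estimated_size_bytes(n_inputs, n_outputs):
    return n_inputs * 148 + n_outputs * 34 + 10

def estimated_fee(n_inputs, n_outputs):
    return math.ceil(estimated_size_bytes(n_inputs, n_outputs) / 1000) * FEE_PER_KB

print(estimated_fee(1, 2))    # one big input, two outputs: ~226 bytes -> 0.0005 BTC
print(estimated_fee(40, 2))   # same value built from 40 small inputs: ~5998 bytes -> 0.003 BTC

So a $5 payment stitched together from dust can cost far more in fees than a $5,000 payment spending a single large coin.
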
paraipan (Legendary)
February 21, 2013, 08:55:32 PM
#319

If the block size limit is reached (for the average block), miners will implement algorithms to select transactions to include so as to maximize the fee collected. This will drive up the cost of fees as people compete to have their transactions included in a timely manner. Let's keep bitcoin the cheapest transaction processor around by avoiding such a scenario!

Let's not! If security needs to be paid for, markets will understand and self-adjust. Why push it over the limit and ensure that small-time merchants and miners can't keep up with bandwidth and storage requirements? It will be feasible in a few years though.

notig (Sr. Member)
February 21, 2013, 09:08:29 PM
Last edit: February 21, 2013, 11:32:04 PM by notig
#320

Quote from: paraipan on February 21, 2013, 08:55:32 PM
Let's not! If security needs to be paid for, markets will understand and self-adjust. Why push it over the limit and ensure that small-time merchants and miners can't keep up with bandwidth and storage requirements? It will be feasible in a few years though.

Let's.

Already controversy is brewing... Already businesses are starting to back away from bitcoin, because if the block limit isn't raised then one of three things will happen: 1. Bitcoin fails. 2. Bitcoin gets used only for moving large amounts of money, and other cryptocurrencies take over, eventually displacing bitcoin itself. 3. Bitcoin gets used only for moving large amounts of money, and "bitcoin clearing houses" fill the gaps, which increases the risk of fraud/theft/unaccountability, adds avenues of attack, and forms REAL centralization. Not some hypothetical BS.