Bitcoin Forum
Author Topic: Share your ideas on what to replace the 1 MB block size limit with  (Read 6958 times)
amincd (OP)
Hero Member
*****
Offline Offline

Activity: 772
Merit: 501


View Profile
July 31, 2014, 05:48:34 PM
Last edit: July 31, 2014, 06:04:57 PM by amincd
 #21

It is perfectly possible to transact in Bitcoin without using the blockchain for every transaction (in a decentralized way too— though for buying soda pop, federated solutions may be much less costly).

How can you transact in Bitcoin in a decentralized way without the blockchain? Rapidly adjusted payment channels need aggregators (centralized parties) if you want to use them to transact with parties with whom you have no prior or ongoing relationship (e.g. a random vending machine), and they require locking up your bitcoin for a period.

Quote
Yep, though note that that post was also from before the million byte limit was added to Bitcoin, along with many other protections against loss of decentralization and against denial of service... it's an argument to the general feasibility of this class of approach, and indeed— it's fine. We're still not yet to a point where sending 100GB/day is "not a big deal", nor is demand for Bitcoin transactions anything like that (and, arguably if Bitcoin currently required 100GB/day now we never would reach that level of demand— because such a costly system at this point would be completely centralized and inferior to traditional banks and Visa).

I agree entirely. We're not at the point where Bitcoin should be handling 4000 tps. I'm just making a case for not sticking with the 1 MB block size limit, and putting in place a mechanism where, over time, it can (automatically) scale to that volume.

Quote
Visa's 2008 transaction volume is also a long way from handling the total transaction volume of the world's small cash transactions, as you seemed to be envisioning— yet Bitcoin _can_ accommodate that, but not if you continue to believe you can shove all the world's transactions constantly into a single global broadcast network.

That is true. Ultimately, the Bitcoin blockchain cannot handle the total volume of the world's small cash transactions. I think we can cross that bridge when we get there though. Getting to 4,000 tps would radically transform the world's financial system, and inject a massive amount of capital and manpower into the Bitcoin community, making it easier for new solutions (e.g. sidechains) to be developed.
"Bitcoin: the cutting edge of begging technology." -- Giraffe.BTC
solex
Legendary
*
Offline Offline

Activity: 1078
Merit: 1002


100 satoshis -> ISO code


View Profile
August 01, 2014, 01:58:34 AM
Last edit: August 01, 2014, 04:18:23 AM by solex
 #22

gmaxwell makes clear that this subject has been debated in many threads. It keeps getting raised, and the reason is that 18 months have passed since Jeweller's thread, and there is probably less than 18 months until the block size limit starts crippling transaction flows, just as the 250KB soft-limit did in early March 2013.

Quote
Sadly there is still no proposal that I've seen which really closes the loop between user capabilities (esp. bandwidth handling, as bandwidth appears to be the 'slowest' of the technologies to improve). At best I've seen applying some additional local exponential cap based on historical bandwidth numbers, which seems very shaky since small parameter changes can easily make the difference between too constrained and completely unconstrained. The best proxy I've seen for user choice is protocol rule limits, but those are overly brittle and hard to change.

Satoshi put the 1MB limit into place nearly 4 years ago, mainly as an anti-spam measure. Now that the block limit exists, at the very minimum it should increase at the same rate as the average global internet broadband speed.

UK consumer broadband (download) average speed each year


It is a reasonable assumption that all major countries which host bitcoin nodes have seen a similar growth pattern, and that upload speeds follow the same pattern.
So, since a 1MB max block size was acceptable in 2010, within the goal of maintaining decentralization, 3MB must be acceptable today.

Large blocks are already being created, as a matter of course, by different miners:

Height   Transactions   Output total    Relayed by                      Size (kB)
313377   298            3,178.83 BTC    5.9.24.81                       731.56
313376   1230           3,322.72 BTC    Eligius                         877.88
313375   1447           2,434.47 BTC    Unknown with 1AcAj9p Address    731.47
313374   1897           5,733.43 BTC    GHash.IO                        731.61

The bare minimum which needs doing is something like:
if block height > 330,000:
    max block size = 3 MB (and recalculate dependent variables)
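
For illustration only, a minimal sketch of what that height-gated bump could look like in C++-style consensus code (the constant names and GetMaxBlockSize() are made up here, not actual Bitcoin Core code):

Code:
// Hypothetical hard-fork sketch; names are invented for illustration.
static const unsigned int HARDFORK_HEIGHT    = 330000;
static const unsigned int MAX_BLOCK_SIZE_OLD = 1000000;  // 1 MB
static const unsigned int MAX_BLOCK_SIZE_NEW = 3000000;  // 3 MB

// The limit (and anything derived from it, e.g. the sigop limit)
// becomes a function of block height instead of a constant.
unsigned int GetMaxBlockSize(int nHeight)
{
    return (nHeight > HARDFORK_HEIGHT) ? MAX_BLOCK_SIZE_NEW : MAX_BLOCK_SIZE_OLD;
}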

Or better still, a flexible limit based upon demand. Remember, people are paying for their transactions to be processed:

The median size of a set of previous blocks.
A set of 2016 blocks is a large sample, representative of real bitcoin usage, so a flexible limit determined at each difficulty change makes sense.
The fees market (which is still dysfunctional) is a lesser concern at the present time.
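
A rough sketch of how such a flexible limit could be computed at each difficulty change (hypothetical names; the "twice the median" multiplier and the 1MB floor are assumptions for illustration, not a concrete proposal):

Code:
#include <algorithm>
#include <vector>

// Recompute the limit from the median size of the last 2016 blocks,
// never letting it fall below the current 1 MB.
unsigned int NextMaxBlockSize(std::vector<unsigned int> vSizes)  // sizes of the previous 2016 blocks
{
    const unsigned int nFloor = 1000000;                         // 1 MB floor (assumed)
    size_t mid = vSizes.size() / 2;
    std::nth_element(vSizes.begin(), vSizes.begin() + mid, vSizes.end());
    unsigned int nMedian = vSizes[mid];
    return std::max(nFloor, 2 * nMedian);                        // e.g. allow up to 2x the median
}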

Bitcoin Core version 0.8 focused on LevelDB, 0.9 on the payment protocol. Version 0.10 really needs to address the block size.

It is crazy to allow the scenario below to happen again, this time over the 1MB constant, when all nodes, not just miners, would be affected:

By default Bitcoin will not create blocks larger than 250kb even though it could do so without a hard fork. We have now reached this limit. Transactions are stacking up in the memory pool and not getting cleared fast enough.

What this means is, you need to take a decision and do one of these things:

  • Start your node with the -blockmaxsize flag set to something higher than 250kb, for example -blockmaxsize=1023000. This will mean you create larger blocks that confirm more transactions. You can also adjust the size of the area in your blocks that is reserved for free transactions with the -blockprioritysize flag.
  • Change your node's code to de-prioritize or ignore transactions you don't care about; for example, Luke-Jr excludes SatoshiDice transactions, which makes way for other users.
  • Do nothing.

If everyone does nothing, then people will start having to attach higher and higher fees to get into blocks until Bitcoin fees end up being uncompetitive with competing services like PayPal.

If you mine on a pool, ask your pool operator what their policy will be on this, and if you don't like it, switch to a different pool.

ArticMine
Legendary
*
Offline Offline

Activity: 2282
Merit: 1050


Monero Core Team


View Profile
August 03, 2014, 01:48:38 AM
 #23

Here is an excellent graphic on when we would likely reach the 1 MB Block limit. https://bitcointalk.org/index.php?topic=400235.msg8153516#msg8153516. A reasonable prediction is within 12 months, likely during the next major price move.

Concerned that blockchain bloat will lead to centralization? Storing less than 4 GB of data once required the budget of a superpower and a warehouse full of punched cards. https://upload.wikimedia.org/wikipedia/commons/8/87/IBM_card_storage.NARA.jpg https://en.wikipedia.org/wiki/Punched_card
ABISprotocol
Sr. Member
****
Offline Offline

Activity: 278
Merit: 251

ABISprotocol on Gist


View Profile WWW
August 21, 2014, 06:55:29 AM
Last edit: August 21, 2014, 07:26:19 AM by ABISprotocol
 #24

but in my opinion that's a feature compensating for cheaper future storage and processing resources
Storage in the (not so far distant) future will not be free. Talk about "programmed destruction"— yikes. What the bytecoin stuff does reduces all the generated coin, including subsidy— what you're suggesting really is a duplicate of it, but less completely considered; please check out the bytecoin whitepaper. I suppose that leaving _out_ the fees at least avoids the bad incentive trap. It's still broken, nonetheless, and you really can't wave your hands and ignore the fact that subsidy will be pretty small in only a few years... esp. with the same approach being apparently ineffective in bytecoin and monero when their subsidy is currently quite large.

I just read this whole thread and found it very interesting. After reading it, I decided to go back and re-read this:

Output Distribution Obfuscation (posted July 16, 2014), by Gregory Maxwell and Andrew Poelstra. (This involves use of cryptonote-based bytecoin (BCN) ring signatures, described as a possibility for bitcoin: "Using Bytecoin Ring Signatures (BRS), described at cryptonote.org, it is possible to disguise which of the utxos is being referenced in any given transaction input. This is done by simply referencing all the utxos, then ringsigning with a combination of all their associated pubkeys.")
http://download.wpsoftware.net/bitcoin/wizardry/brs-arbitrary-output-sizes.txt
(This, in part, proposes "an output-encoding scheme by which outputs of *every possible size* are created alongside each real output (...) further requir(ing) that the "ghost outputs" are indistinguishable from the real ones to anyone not in possession of the outputs' associated private key. (With ring signatures hiding the exact outputs which are being spent, even spending a real output will not identify it as real.)" In this scenario, ghost outputs are chosen randomly, and users improve anonymity when selecting ghost outputs "by trying to reuse n for any given P, V.")

(Background to this:)
(...)the bytecoin ring signature is pretty straight forward to add to Bitcoin— though it implies a pretty considerable scalability tradeoff. Andytoshi and I have come up with some pretty substantial cryptographic improvements, e.g. https://download.wpsoftware.net/bitcoin/wizardry/brs-arbitrary-output-sizes.txt

So, my questions:

How would this output-encoding scheme work realistically for something of *every possible size?* And assuming this were applied to bitcoin as an option [much as SharedCoin is in blockchain.info], wouldn't it still come at a cost, both in terms of the size of the data for whatever transactions involve the scheme when users choose to utilize it, and in terms of the corresponding additional fee(s)? How are the scalability issue(s) addressed? (Please also explain for both the scripting and no-scripting scenarios.)

ABISprotocol (Github/Gist)
http://abis.io
Gavin Andresen
Legendary
*
qt
Offline Offline

Activity: 1652
Merit: 2216


Chief Scientist


View Profile WWW
August 21, 2014, 02:27:35 PM
 #25

Quote
Edit: With respect to CryptoNote and Monero, I do see merit in the argument that the fee penalty alone may not be enough to constrain blocksize; however, when combined with the difficulty increase requirement the picture changes. As for the specifics of the Monero blockchain, there are also other factors, including dust from mining pools, that led to the early bloating, and we must also keep in mind that CryptoNote and Monero also have built-in privacy which adds to the blocksize by its very nature.
Yes, it's complicated— but it's also concerning. The only altcoins that I'm aware of which have diverged from the current Bitcoin behavior on this front, and did so with the benefits of all the discussions we've had before being available to them, have been suffering from crippling bloat.

Glancing at block explorers for Monero and ByteCoin... I'm not seeing crippling bloat right now. I see lots of very-few-transactions blocks.

Glancing at recent release notes for ByteCoin, it looks like transactions were not being prioritized by fee, which is fundamental to getting a working fee market.

Have Monero and ByteCoin fixed the bloat problem, or did the transaction spammers just get bored and go away?


How often do you get the chance to work on a potentially world-changing project?
ABISprotocol
Sr. Member
****
Offline Offline

Activity: 278
Merit: 251

ABISprotocol on Gist


View Profile WWW
August 21, 2014, 05:34:53 PM
 #26

Quote
Edit: With respect to CryptoNote and Monero, I do see merit in the argument that the fee penalty alone may not be enough to constrain blocksize; however, when combined with the difficulty increase requirement the picture changes. As for the specifics of the Monero blockchain, there are also other factors, including dust from mining pools, that led to the early bloating, and we must also keep in mind that CryptoNote and Monero also have built-in privacy which adds to the blocksize by its very nature.
Yes, it's complicated— but it's also concerning. The only altcoins that I'm aware of which have diverged from the current Bitcoin behavior on this front, and did so with the benefits of all the discussions we've had before being available to them, have been suffering from crippling bloat.

Glancing at block explorers for Monero and ByteCoin... I'm not seeing crippling bloat right now. I see lots of very-few-transactions blocks.

Glancing at recent release notes for ByteCoin, it looks like transactions were not being prioritized by fee, which is fundamental to getting a working fee market.

Have Monero and ByteCoin fixed the bloat problem, or did the transaction spammers just get bored and go away?

I'm not sure whether bloat has been addressed to the extent that it would need to be in Bytecoin (BCN), but I do know that recent updates dropped the default fee from 10 BCN to 0.01 BCN (1,000 times cheaper). The updates also let the user specify what the fee will be, so that the higher the fee, the faster the transaction makes it in, like this:

In this example, I've shown the general format for the transfer command, with the -f (fee) set to 10 bytecoin:

Code:
transfer <mixin_count> <address> <amount> [-p payment_id] [-f fee] 
Code:
transfer 10 27sfd....kHfjnW 10000 -p cfrsgE...fdss -f 10

My understanding is that gmaxwell and andytoshi (et. al.?) have come up with "substantial cryptographic improvements" to the BCN system which potentially are a "pretty straight forward to add to Bitcoin" as per gmaxwell, see:  https://download.wpsoftware.net/bitcoin/wizardry/brs-arbitrary-output-sizes.txt and previous comment(s) cited in this thread.  However, I still have my (unanswered) questions, to wit:

How would this output-encoding scheme work realistically for something of *every possible size?*  

Assuming this were applied to bitcoin as an option [much as SharedCoin is in blockchain.info], wouldn't it still come at a cost both in terms of size of the data corresponding to whatever transactions involved the scheme in the cases where users choose to utilize it, as well as corresponding additional fee(s)?  

How are the scalability issue(s) addressed?

ABISprotocol (Github/Gist)
http://abis.io
jl2012
Legendary
*
Offline Offline

Activity: 1792
Merit: 1097


View Profile
August 21, 2014, 06:05:28 PM
 #27

1MB-block supporters have 2 major arguments: decentralization and block space scarcity. By considering ONLY these 2 factors, however, the BEST solution is to limit a block to only 2 transactions: a reward transaction and a normal transaction. This will limit the block size to the absolute minimum, and make sure everyone could mine with a 9.6k modem and an 80386 computer.

The truth is that the 1MB limit was just an arbitrary choice by Satoshi, made without carefully considering its implications (at least, any such consideration is not well documented). He chose "1" simply because it's the first natural number. Had he chosen 2MB instead of 1MB, I am pretty sure that Bitcoin would have worked in exactly the same way as how it works now. Had he chosen 0.5MB, we might have already run into big trouble.

We want to maximize miner profit because that will translate to security. However, a block size limit does not directly translate to maximized miner profit. Consider the most extreme "2-transaction block limit": that would crush the value of Bitcoin to zero and no one would mine for it. We need to find a reasonable balance but 1MB is definitely not a good one. Assume that we aim at paying $1 million/block ($52 billion/year) to secure the network (I consider this a small amount if Bitcoin ever grows to a trillion market cap). The current 7tps limit will require a fee of $238/tx, which is way too expensive even for a global settlement network among banks.

To answer the question of "what to replace the 1 MB block size limit with", we first need to set a realistic goal for Bitcoin. In the long term, I don't think bitcoin could/should be used for buying a cup of coffee. To be competitive with VISA, the wiki quotes 2000tps, or 1200000tx/block, or 586MB/block (assuming 512bytes/tx). To be competitive with SWIFT, which has about 20 million transactions per day, it takes 232tps, or 138889tx/block, or 68MB/block. Dividing the $1 million fee/block by these transaction counts gives $0.83/tx and $7.2/tx respectively. A fixed rate of $7.2/tx is a bit high but still (very) competitive with wire transfer. $0.83/tx is very competitive for transactions over $100 of value. I think a reasonable choice, with the implications for centralization considered, would be around 100MB/block. That takes 1.5Mb/s of bandwidth in a perfect scenario. That would be a better equilibrium in technical and economical terms.
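
A quick back-of-the-envelope check of those numbers (plain arithmetic, nothing more):

Code:
#include <cstdio>

int main()
{
    const double fees_per_block = 1000000.0;       // target: $1 million of fees per block
    const double visa_tx  = 2000.0 * 600;          // 2000 tps -> 1,200,000 tx per 10-minute block
    const double swift_tx =  232.0 * 600;          //  232 tps ->   139,200 tx per block
    const double now_tx   =    7.0 * 600;          //    7 tps ->     4,200 tx per block (1 MB limit)
    std::printf("VISA-scale:  $%.2f/tx\n", fees_per_block / visa_tx);   // ~$0.83
    std::printf("SWIFT-scale: $%.2f/tx\n", fees_per_block / swift_tx);  // ~$7.2
    std::printf("1 MB today:  $%.2f/tx\n", fees_per_block / now_tx);    // ~$238
    return 0;
}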

Donation address: 374iXxS4BuqFHsEwwxUuH3nvJ69Y7Hqur3 (Bitcoin ONLY)
LRDGENPLYrcTRssGoZrsCT1hngaH3BVkM4 (LTC)
PGP: D3CC 1772 8600 5BB8 FF67 3294 C524 2A1A B393 6517
mmeijeri
Hero Member
*****
Offline Offline

Activity: 714
Merit: 500

Martijn Meijering


View Profile
August 21, 2014, 06:29:10 PM
 #28

We want to maximize miner profit because that will translate to security.

That step in the argument needs more work. Security isn't the only consideration, nor does it obviously trump all others. At some point the incremental value of additional security might not be worth it.

ROI is not a verb, the term you're looking for is 'to break even'.
mmeijeri
Hero Member
*****
Offline Offline

Activity: 714
Merit: 500

Martijn Meijering


View Profile
August 21, 2014, 06:31:30 PM
 #29

We need to find a reasonable balance but 1MB is definitely not a good one.

Blocks of 1MB combined with tree-chains could turn out to be a perfectly adequate solution. The trade space is larger than just changing the block size.

ROI is not a verb, the term you're looking for is 'to break even'.
gmaxwell
Moderator
Legendary
*
expert
Offline Offline

Activity: 4172
Merit: 8419



View Profile WWW
August 21, 2014, 08:17:06 PM
 #30

1MB-block supporters have 2 major arguments: decentralization and block space scarcity. By considering ONLY these 2 factors, however, the BEST solution is to limit a block to only 2 transactions:
Gee. And yet no one is suggesting that. Perhaps this should suggest your understanding of other people's views is flawed, before clinging to it and insulting people with an oversimplification of their views and preventing polite discourse as a result? :-/

Quote
Had he chosen 2MB instead of 1MB, I am pretty sure that Bitcoin would have worked in exactly the same way as how it works now.
Maybe, we've suffered major losses of decentralization with even many major commercial players not running their own verifying nodes and the overwhelming majority of miners— instead relying on centralized services like Blockchain.info and mining pools. Even some of the mining pools have tried not running their own nodes but instead proxying work from other pools. The cost of running a node is an often cited reason. Some portion of this cost may be an illusion, some may be a constant (e.g. software maintenance), but to the extent that the cost is proportional to the load on the network, having higher limits would not be improving things.

What we saw in Bitcoin last year was a rise of ludicrously inefficient services— ones that bounced transactions through several addresses for every logical transaction made by users, games that produced a pair of transactions per move, etc. Transaction volume rose precipitously, but when fees and delays became substantial many of these services changed strategies and increased their efficiency. Though I can't prove it, I think it is likely no coincidence that the load has equalized near the default target size.

Quote
We want to maximize miner profit because that will translate to security.
But this isn't the only objective, we also must have ample decentralization since this is what provides Bitcoin with any uniqueness or value vs the vastly more efficient centralized payment systems.

Quote
We need to find a reasonable balance
Agreed.

Quote
but 1MB is definitely not a good one.
At the moment it seems fine. Forever? Not likely— I agree, and on all counts. We can reasonably expect available bandwidth, storage, cpu-power, and software quality to improve. In some span of time 10MB will have similar relative costs to 1MB today, and so all factors that depend on relative costs will be equally happy with some other size.

Quote
Assume that we aim at paying $1 million/block ($52 billion/year) to secure the network (I consider this a small amount if Bitcoin ever grows to a trillion market cap). The current 7tps limit will require a fee of $238/tx, which is way too expensive even for a global settlement network among banks.
This is ignoring various kinds of merged mining income, which might change the equation somewhat... but this is hard to factor in today.

Quote
I think a reasonable choice, with the implications for centralization considered, would be around 100MB/block. That takes 1.5Mb/s of bandwidth in a perfect scenario. That would be a better equilibrium in technical and economical terms.
At the moment— based on how we're seeing things play out with the current load levels on the network— I think 100MB blocks would be pretty much devastating to decentralization; in a few years, likely less so, but at the moment it would be even more devastating to the existence of a fee market.

Have Monero and ByteCoin fixed the bloat problem, or did the transaction spammers just get bored and go away?
Yes, sort of— fee requirements at major pools, and monero apparently planning a hard-fork to change the rules; I'm not sure where that's standing— I'll ping some of their developers to comment. Monero's blockchain size is currently about 2.1GBytes on my disk here.

My understanding is that gmaxwell and andytoshi (et. al.?) have come up with "substantial cryptographic improvements" to the BCN system which potentially are a "pretty straight forward to add to Bitcoin" as per gmaxwell, see:  https://download.wpsoftware.net/bitcoin/wizardry/brs-arbitrary-output-sizes.txt and previous comment(s) cited in this thread.  However, I still have my (unanswered) questions, to wit:
How would this output-encoding scheme work realistically for something of *every possible size?*  
Assuming this were applied to bitcoin as an option [much as SharedCoin is in blockchain.info], wouldn't it still come at a cost both in terms of size of the data corresponding to whatever transactions involved the scheme in the cases where users choose to utilize it, as well as corresponding additional fee(s)?  
How are the scalability issue(s) addressed?
The improvements Andrew and I came up with do not change the scalability at all, they change the privacy (and do work for all possible sizes), and since it's not scalability related it's really completely off-topic for this thread.
evanito
Member
**
Offline Offline

Activity: 83
Merit: 10

Your average Bitcoin/Ethereum enthusiast


View Profile
August 22, 2014, 03:19:19 AM
 #31

I don't think the size per block matters, as long as we can improve the transactions per second cap.
As you said, 7 transactions per second is minuscule, and we should focus on finding the right balance for maximum speed.
jl2012
Legendary
*
Offline Offline

Activity: 1792
Merit: 1097


View Profile
August 22, 2014, 03:43:05 AM
 #32

1MB-block supporters have 2 major arguments: decentralization and block space scarcity. By considering ONLY these 2 factors, however, the BEST solution is to limit a block to only 2 transactions:
Gee. And yet no one is suggesting that. Perhaps this should suggest your understanding of other people's views is flawed, before clinging to it and insulting people with an oversimplification of their views and preventing polite discourse as a result? :-/


Quote
Quote
but 1MB is definitely not a good one.
At the moment it seems fine. Forever? Not likely— I agree, and on all counts. We can reasonably expect available bandwidth, storage, cpu-power, and software quality to improve. In some span of time 10MB will have similar relative costs to 1MB today, and so all factors that depend on relative costs will be equally happy with some other size.

Some of the 1MB-block supporters believe we should keep the limit forever, and move 99.9% of the transactions to off-chain. I just want to point out that their logic is completely flawed.



Quote
Quote
Had he chosen 2MB instead of 1MB, I am pretty sure that Bitcoin would have worked in exactly the same way as how it works now.
Maybe, we've suffered major losses of decentralization with even many major commercial players not running their own verifying nodes and the overwhelming majority of miners— instead relying on centralized services like Blockchain.info and mining pools. Even some of the mining pools have tried not running their own nodes but instead proxying work from other pools. The cost of running a node is an often cited reason. Some portion of this cost may be an illusion, some may be a constant (e.g. software maintenance), but to the extent that the cost is proportional to the load on the network, having higher limits would not be improving things.

I've been maintaining a node with my 100Mb/s domestic connection since 2012. It takes less than 800MB of RAM now (I have 24GB). CPU load is <0.5% of a Core i5. Hard drive space is essentially infinite. I don't anticipate any problem even if everything scales up by 10x, or 100x with some optimization.

Therefore, people are not running a full node simply because they don't really care. Cost is mostly an excuse. From a development and maintenance standpoint it's just easier to rely on Blockchain.info than to run a full node. People are not solo mining mostly because of variance, as only pools could survive when the difficulty is growing by 20% every 2 weeks. This may sound bad, but the majority of the commercial players and miners are here for profit, not the ideology of Bitcoin.

At the end of the day, theoretically, we only require one honest full node on the network to capture all the wrongdoing in the blockchain, and tell the whole world. I'm pretty sure we will have enough big bitcoin whales and altruistic players to maintain full nodes as long as the cost is reasonable, say $100/month. I don't think we will hit this even by scaling up 1000x.

The real problem for scaling is probably in mining. I hope Gavin's O(1) propagation would help a bit.

Quote
What we saw in Bitcoin last year was a rise of ludicrously inefficient services— ones that bounced transactions through several addresses for every logical transaction made by users, games that produced a pair of transactions per move, etc. Transaction volume rose precipitously, but when fees and delays became substantial many of these services changed strategies and increased their efficiency. Though I can't prove it, I think it is likely no coincidence that the load has equalized near the default target size.

If the limit were 2MB, the load would be higher, but not doubled. Some people would have to shut down their bitcoind, but we should still have more than enough full nodes to maintain a healthy network. Core developers may have different development priorities (e.g. optimization of network use rather than the payment protocol). These are not questions of life or death.

I hate that spam too, but I also recognize that a successful bitcoin network has to be able to handle much more than that.

Quote
Quote
Assume that we aim at paying $1 million/block ($52 billion/year) to secure the network (I consider this a small amount if Bitcoin ever grows to a trillion market cap). The current 7tps limit will require a fee of $238/tx, which is way too expensive even for a global settlement network among banks.
This is ignoring various kinds of merged mining income, which might change the equation somewhat... but this is hard to factor in today.

Merge mining incurs extra cost, with the same scaling properties as bitcoin. I'm not sure how bitcoin mining could be substantially funded by merge mining.

Quote
Quote
I think a reasonable choice, with the implications for centralization considered, would be around 100MB/block. That takes 1.5Mb/s of bandwidth in a perfect scenario. That would be a better equilibrium in technical and economical terms.
At the moment— based on how we're seeing things play out with the current load levels on the network— I think 100MB blocks would be pretty much devastating to decentralization; in a few years, likely less so, but at the moment it would be even more devastating to the existence of a fee market.

I'm just trying to set a realistic target, not saying that we should raise the limit to 100MB today. However, the 1MB limit will become a major limiting factor soon, most likely within 2 years.

Donation address: 374iXxS4BuqFHsEwwxUuH3nvJ69Y7Hqur3 (Bitcoin ONLY)
LRDGENPLYrcTRssGoZrsCT1hngaH3BVkM4 (LTC)
PGP: D3CC 1772 8600 5BB8 FF67 3294 C524 2A1A B393 6517
jl2012
Legendary
*
Offline Offline

Activity: 1792
Merit: 1097


View Profile
August 22, 2014, 04:41:11 AM
 #33

This is how I see the problem:

1. 1MB was just an arbitrary choice to protect the network at the alpha stage. Satoshi made it clear that he intended to raise it.

2. With some calculation I think 100MB is a realistic target, to keep the whole thing reasonably decentralized, to charge a competitive transaction fee, and to offer enough profit for miners to keep the network safe.

3. To reach the 100MB target we should raise it gradually.

4. We should consider limiting the growth of the UTXO set if the MAX_BLOCK_SIZE is increased. For each block, calculate (total size of new outputs - total size of spent outputs) and put a limit on it (see the sketch after this list).

5. We won't know the price of bitcoin in the future. Requiring miners to give up a fixed amount of bitcoin for a bigger block size could become problematic.

6. I have demonstrated that the block size could be increased with a soft-fork. I would like to know whether people prefer a cumbersome soft-fork as I suggested (https://bitcointalk.org/index.php?topic=283746.0), or a simple hard-fork as Satoshi suggested (https://bitcointalk.org/index.php?topic=1347.msg15366#msg15366). Either choice has its own risks and benefits.
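
To illustrate point 4, a rough sketch of what a per-block UTXO-growth rule could look like (hypothetical names and cap, not a concrete proposal from this thread):

Code:
#include <cstddef>

// Accept a block only if the net growth of the UTXO set it causes stays under a cap.
bool CheckUtxoGrowth(size_t nNewOutputBytes,    // serialized size of outputs the block creates
                     size_t nSpentOutputBytes,  // serialized size of outputs the block spends
                     size_t nMaxGrowthPerBlock) // consensus cap per block, value to be debated
{
    if (nNewOutputBytes <= nSpentOutputBytes)
        return true;                            // the block shrinks the UTXO set: always fine
    return (nNewOutputBytes - nSpentOutputBytes) <= nMaxGrowthPerBlock;
}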

Donation address: 374iXxS4BuqFHsEwwxUuH3nvJ69Y7Hqur3 (Bitcoin ONLY)
LRDGENPLYrcTRssGoZrsCT1hngaH3BVkM4 (LTC)
PGP: D3CC 1772 8600 5BB8 FF67 3294 C524 2A1A B393 6517
gmaxwell
Moderator
Legendary
*
expert
Offline Offline

Activity: 4172
Merit: 8419



View Profile WWW
August 22, 2014, 05:31:13 AM
 #34

Some of the 1MB-block supporters believe we should keep the limit forever, and move 99.9% of the transactions to off-chain. I just want to point out that their logic is completely flawed.
Can you cite these people specifically?  The strongest I've seen is that it "may not be necessary" and shouldn't be done unless the consequences are clear (including mining incentives, etc), the software well tested, etc.

Quote
I've been maintaining a node with my 100Mb/s domestic connection since 2012. It takes less than 800MB of RAM now (I have 24GB). CPU load is <0.5% of a Core i5. Hard drive space is essentially infinite. I don't anticipate any problem even if everything scales up by 10x, or 100x with some optimization.
Great. I live in the middle of silicon valley and no such domestic connection is available at any price (short of me paying tens of thousands of dollars NRE to lay fiber someplace). This is true for much of the world today.

Quote
Therefore, people are not running a full node simply because they don't really care. Cost is mostly an excuse.
I agree with this partially, but I know it's not at all the whole truth of it. Right now, even on a host with solid gigabit connectivity you will take days to synchronize the blockchain— this is due to dumb software limitations which are being fixed... but even with them fixed, on a quad core i7 3.2GHz and a fast SSD you're still talking about three hours. With 100x that load you're talking about 300 hours— 12.5 days.

Few who are operating in any kind of task driven manner— e.g. "setup a service" are willing to tolerate that, and I can't blame them.

Quote
People are not solo mining mostly because of variance,
There is no need to delegate your mining vote to a third party to mine— it would be perfectly possible for pools to pay according to shares that pay them, regardless of where you got your transaction lists from— but they don't do this.

Quote
At the end of the day, theoretically, we only require one honest full node on the network to capture all the wrongdoing in the blockchain, and tell the whole world.
And tell them what?  That hours ago the miners created a bunch of extra coin out of thin air ("no worries, the inflation was needed to support the economy/security/etc. and there is nothing you can do about it because it's hours buried and the miners won't reorg it out and any attempt to do so opens up a bunch of double spending risk")— How exactly does this give you anything over a centralized service that offers to let people audit it? In both cases there can always be some excuse good enough to get away with justifying compromising the properties once you've left the law of math and resorted to enforcement by men and politics.

In the whitepaper a better path is mentioned that few seem to have noticed: "One strategy to protect against this would be to accept alerts from network nodes when they detect an invalid block, prompting the user's software to download the full block and alerted transactions to confirm the inconsistency". Sadly, I'm not aware of even any brainstorming behind what it would take to make that a reality beyond a bit I did a few years ago. (... even if I worked on Bitcoin full time I couldn't possibly personally build all the things I think we need to build; there just aren't enough hours in the day)

That isn't the whole tool in the belt, but I point it out to highlight that what you're suggesting above is a real and concerning relaxation of the security model, which moves bitcoin closer to the trust-us-we're-lolgulated-banking-industry... and that it is not at all obvious to me that such compromises are necessary.

It's beyond infuriating to me when I hear a dismissive tone, since pretending these things don't have a decentralization impact removes all interest from working on the technical tools needed to bridge the gap.

Quote
The real problem for scaling is probably in mining.
I'm not sure why you think that— miners are paid for their participation. Some of them have been extracting revenue on the order of hundreds of thousands of dollars a month in fees from their hashers. There are a lot of funds to pay for equipment there.

Quote
I hate that spam
Oh, I wasn't trying to express any opinion/dislike on the inefficient use but to point out that to some extent load expands to fill capacity, and if the price is too low people will use it wastefully or selfishly.

Quote
Merge mining incurs extra cost, with the same scaling properties as bitcoin. I'm not sure how bitcoin mining could be substantially funded by merge mining.
Same cost for miners, who are paid for their resources. Not the same cost for verifiers, because not everyone has to verify everything.

Quote
I'm just trying to set a realistic target, not saying that we should raise the limit to 100MB today. However, the 1MB limit will become a major limiting factor soon, most likely within 2 years.
In spite of all the nits I'm picking above I agree with you in broad strokes.
jl2012
Legendary
*
Offline Offline

Activity: 1792
Merit: 1097


View Profile
August 22, 2014, 08:17:36 AM
Last edit: August 22, 2014, 08:57:18 AM by jl2012
 #35

Some of the 1MB-block supporters believe we should keep the limit forever, and move 99.9% of the transactions to off-chain. I just want to point out that their logic is completely flawed.
Can you cite these people specifically?  The strongest I've seen is that it "may not be necessary" and shouldn't be done unless the consequences are clear (including mining incentives, etc), the software well tested, etc.

I am not going to cite them and provoke unnecessary debate. As you are not one of them, let's stop here.

Quote
Quote
I've been maintaining a node with my 100Mb/s domestic connection since 2012. It takes less than 800MB of RAM now (I have 24GB). CPU load is <0.5% of a Core i5. Hard drive space is essentially infinite. I don't anticipate any problem even if everything scales up by 10x, or 100x with some optimization.
Great. I live in the middle of silicon valley and no such domestic connection is available at any price (short of me paying tens of thousands of dollars NRE to lay fiber someplace). This is true for much of the world today.

This kind of connection has been available in Hong Kong for maybe 10 years. Today, for $20/mo we have 100Mb/s. We can even have a 1Gb/s fiber connection directly to the computer at home for only $70/mo.

Anyway, I know our case is atypical. However, I'd be really surprised if you couldn't do the same in Silicon Valley in 10 years. Also, I doubt one would have any difficulty renting 1U of colocation space with 100Mb/s for $100/mo in Silicon Valley.

Quote
Quote
Therefore, people are not running a full node simply because they don't really care. Cost is mostly an excuse.
I agree with this partially, but I know it's not at all the whole truth of it. Right now, even on a host with solid gigabit connectivity you will take days to synchronize the blockchain— this is due to dumb software limitations which are being fixed... but even with them fixed, on a quad core i7 3.2GHz and a fast SSD you're still talking about three hours. With 100x that load you're talking about 300 hours— 12.5 days.

Few who are operating in any kind of task driven manner— e.g. "setup a service" are willing to tolerate that, and I can't blame them.

Most of them are here for profit so they won't do it anyway, no matter whether it's 1MB or 100MB.

Also, I can't see why we really need to verify every single transaction back to the genesis block. If there is no fork floating around, and no one has been complaining in the last few months, it's really safe to assume the blockchain (up to a few months ago) is legitimate.

Quote
Quote
People are not solo mining mostly because of variance,
There is no need to delegate your mining vote to a third party to mine— it would be perfectly possible for pools to pay according to shares that pay them, regardless of where you got your transaction lists from— but they don't do this.

Again, they won't do it no matter whether it's 1MB or 100MB, unless the protocol forces them to do so.

Quote
Quote
At the end of the day, theoretically, we only require one honest full node on the network to capture all the wrongdoing in the blockchain, and tell the whole world.
And tell them what?  That hours ago the miners created a bunch of extra coin out of thin air ("no worries, the inflation was needed to support the economy/security/etc. and there is nothing you can do about it because it's hours buried and the miners won't reorg it out and any attempt to do so opens up a bunch of double spending risk")— How exactly does this give you anything over a centralized service that offers to let people audit it? In both cases there can always be some excuse good enough to get away with justifying compromising the properties once you've left the law of math and resorted to enforcement by men and politics.

In the whitepaper a better path is mentioned that few seem to have noticed: "One strategy to protect against this would be to accept alerts from network nodes when they detect an invalid block, prompting the user's software to download the full block and alerted transactions to confirm the inconsistency". Sadly, I'm not aware of even any brainstorming behind what it would take to make that a reality beyond a bit I did a few years ago. (... even if I worked on Bitcoin full time I couldn't possibly personally build all the things I think we need to build; there just aren't enough hours in the day)

That isn't the whole tool in the belt, but I point it out to highlight that what you're suggesting above is a real and concerning relaxation of the security model, which moves bitcoin closer to the trust-us-we're-lolgulated-banking-industry... and that it is not at all obvious to me that such compromises are necessary.

It's beyond infuriating to me when I hear a dismissive tone, since pretending these things don't have a decentralization impact removes all interest from working on the technical tools needed to bridge the gap.


Why would it take hours to broadcast a warning like that? Let's say you are a merchant and you can't afford your own full node. You primarily rely on Blockchain.info but you also run an SPV client to monitor the block headers. As long as one of your peers is honest, you should be able to detect any problem in the data of Blockchain.info within 6 confirmations (since the block headers won't match).


Quote
Quote
The real problem for scaling is probably in mining.
I'm not sure why you think that— miners are paid for their participation. Some of them have been extracting revenue on the order of hundreds of thousands of dollars a month in fees from their hashers. There are a lot of funds to pay for equipment there.

I mean, a 100MB block is not a problem for bitcoin whales and altruistic players to run full nodes, and we will have enough honest full nodes to support SPV clients. For miners, however, as block propagation is crucial for their profit, a big block with O(n) propagation time will cause problems. Gavin's O(1) proposal gives some hope, but I have to admit I don't understand the maths behind it.

Quote
Quote
I'm just trying to set a realistic target, not saying that we should raise the limit to 100MB today. However, the 1MB limit will become a major limiting factor soon, most likely within 2 years.
In spite of all the nits I'm picking above I agree with you in broad strokes.


Whether it is a hardfork or an auxiliary-block softfork, this will be the most dramatic change to the protocol. However, I can't see any real progress in reaching consensus despite years of debate.

Donation address: 374iXxS4BuqFHsEwwxUuH3nvJ69Y7Hqur3 (Bitcoin ONLY)
LRDGENPLYrcTRssGoZrsCT1hngaH3BVkM4 (LTC)
PGP: D3CC 1772 8600 5BB8 FF67 3294 C524 2A1A B393 6517
solex
Legendary
*
Offline Offline

Activity: 1078
Merit: 1002


100 satoshis -> ISO code


View Profile
August 23, 2014, 07:03:20 AM
 #36

Whether it is a hardfork or an auxiliary-block softfork, this will be the most dramatic change to the protocol. However, I can't see any real progress in reaching consensus despite years of debate.

Indeed, but it is now apparent (to me anyway) that simply increasing the block limit to allow larger and larger blocks to propagate is not necessary, as this is not the optimal long-term solution. The optimal solution takes advantage of the fact that most transactions are already known to most peers before the next block is mined. So, "highly abbreviated" new blocks can be propagated instead. This is beyond mere data compression because it relies on the receiver knowing most of the block contents in advance.

We see in the O(1) thread that there are excellent proposals on the table for block propagation efficiency:
A) short transaction hashes: as in block network coding, and similarly in the optimized block relay (Matt Corallo already has a relay service live)
B) IBLT blocks

Even better, they are compatible such that A can be used within B giving enormous efficiency gains. This must be the long-term goal.

The next question is: Can the max block size be made flexible (for example: a function of the median size of the previous 2016 blocks) as a phase in the process of introducing block propagation efficiency as a consensus change?

coinft
Full Member
***
Offline Offline

Activity: 187
Merit: 100



View Profile
August 23, 2014, 12:58:17 PM
 #37

Whether it is a hardfork or an auxiliary-block softfork, this will be the most dramatic change to the protocol. However, I can't see any real progress in reaching consensus despite years of debate.

Indeed, but it is now apparent (to me anyway) that simply increasing the block limit to allow larger and larger blocks to propagate is not necessary, as this is not the optimal long-term solution. The optimal solution takes advantage of the fact that most transactions are already known to most peers before the next block is mined. So, "highly abbreviated" new blocks can be propagated instead. This is beyond mere data compression because it relies on the receiver knowing most of the block contents in advance.

We see in the O(1) thread that there are excellent proposals on the table for block propagation efficiency:
A) short transaction hashes: as in block network coding, and similarly in the optimized block relay (Matt Corallo already has a relay service live)
B) IBLT blocks

Even better, they are compatible such that A can be used within B giving enormous efficiency gains. This must be the long-term goal.

The next question is: Can the max block size be made flexible (for example: a function of the median size of the previous 2016 blocks) as a phase in the process of introducing block propagation efficiency as a consensus change?

As far as I understand those schemes, they are only good if you run a node with a current memory pool. The full transactions still need to be communicated at some time, and still need to be written to the blockchain in full. You couldn't just write IBLTs to the blockchain, because no one without your memory pool could reconstruct the TXs.
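
For a feel of how the short-hash idea (proposal A above) works, here is a rough sketch with invented names (not Matt Corallo's actual relay protocol). The announcement carries truncated txids; a peer with a warm mempool rebuilds the block locally and only requests what it is missing, which is exactly why a node without the mempool cannot reconstruct anything from the short ids alone:

Code:
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <map>
#include <vector>

typedef std::vector<unsigned char> Txid;  // 32-byte transaction hash
typedef std::vector<unsigned char> Tx;    // raw transaction bytes

// Truncate a txid to 8 bytes for the block announcement.
uint64_t ShortId(const Txid& txid)
{
    uint64_t id = 0;
    std::memcpy(&id, &txid[0], sizeof(id));
    return id;
}

// Receiver side: match announced short ids against the mempool; anything
// not found must still be requested (and stored) in full.
std::vector<uint64_t> MatchAnnouncement(const std::vector<uint64_t>& vAnnounced,
                                        const std::map<uint64_t, Tx>& mempoolByShortId,
                                        std::vector<Tx>& vReconstructed)
{
    std::vector<uint64_t> vMissing;
    for (size_t i = 0; i < vAnnounced.size(); i++) {
        std::map<uint64_t, Tx>::const_iterator it = mempoolByShortId.find(vAnnounced[i]);
        if (it != mempoolByShortId.end())
            vReconstructed.push_back(it->second);  // already have it locally: no bytes on the wire
        else
            vMissing.push_back(vAnnounced[i]);
    }
    return vMissing;
}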
lnternet
Sr. Member
****
Offline Offline

Activity: 299
Merit: 253


View Profile
August 28, 2014, 06:09:58 PM
 #38

I think I'm not alone when I say we are dangerously close to hitting the limit already and need a quick fix right away. The only easy and quick fix is raising the limit, and to not make people upset, just double it to 2MB.

Reading about soft fork vs hard fork, this appears to be more difficult than a simple user like me imagines. But something needs to be done soon. Getting transactions stuck when the next boom hits is something we can all agree shouldn't happen.



In related news, I also feel spamming the network is way too cheap right now. I can spam 10 tx/s for less than 9 BTC for a full day (assuming a 0.01 mBTC fee: 10 tx/s × 86,400 s × 0.00001 BTC ≈ 8.64 BTC). If I were in a position planning to buy in big time, I would do this, expecting a price drop with probability high enough to make the whole endeavor worth it.

1ntemetqbXokPSSkuHH4iuAJRTQMP6uJ9
gmaxwell
Moderator
Legendary
*
expert
Offline Offline

Activity: 4172
Merit: 8419



View Profile WWW
August 28, 2014, 10:43:47 PM
Last edit: August 28, 2014, 11:07:32 PM by gmaxwell
 #39

The next question is: Can the max block size be made flexible (for example: a function of the median size of the previous 2016 blocks) as a phase in the process of introducing block propagation efficiency as a consensus change?
Letting miners _freely_ expand blocks is a bit like asking the foxes to guard the hen-house— it's almost equivalent to no limit at all. Individual miners have every incentive to put as many fee-paying transactions in their own blocks as they can (esp. presuming propagation delays are resolved, or the miner has a lot of hashpower and so propagation hardly matters)— because they only need to verify once, the cost of a few more CPUs or disks isn't a big deal. In theory (I say this because miners have been bad with software updates), they can afford the operating costs of fixing things like integer overflows that arise with larger blocks, especially since they usually have little customization— other nodes, not so much?

Since miners can always put fewer transactions in, it's not unreasonable for the block-chain to coordinate that soft-limit (in the hopes that baking in the conspiracy discourages worse ones from forming). But in that case it shouldn't be based on the actual size, but instead on an explicit limit, so that expressing your honest belief that the network would be better off with smaller blocks is not at odds with maximizing your revenue in this block.

If you want to talk about new limit-agreement mechanisms, I think that txout creation is more interesting to limit than the size directly though... or potentially both.

Even for these uses, the median might not be the right metric, however— consider that recently it would have given control to effectively a single party, and at the moment it would effectively give it to two parties. You could easily use the median for lowering the limit and (say) the 25th percentile for raising it though... though even that's somewhat sloppy, because having more than half the hashrate in your conspiracy means you can have all the hashrate if you really want it. Sad
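
As a rough illustration of that last suggestion (invented names; percentiles computed over explicit per-block limit votes as described above, with placeholder details rather than a worked-out proposal):

Code:
#include <algorithm>
#include <vector>

// vVotes: the explicit limit each of the last 2016 blocks voted for.
// Lowering follows the median; raising only follows the 25th percentile,
// so roughly three quarters of blocks must vote higher before the limit grows.
unsigned int NextLimitFromVotes(std::vector<unsigned int> vVotes, unsigned int nCurrentLimit)
{
    std::sort(vVotes.begin(), vVotes.end());
    unsigned int nMedian = vVotes[vVotes.size() / 2];
    unsigned int n25th   = vVotes[vVotes.size() / 4];   // lower quartile of the votes
    if (nMedian < nCurrentLimit)
        return nMedian;          // more than half voted to lower: lower it
    if (n25th > nCurrentLimit)
        return n25th;            // at least ~75% voted higher: raise it
    return nCurrentLimit;
}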
WhiteBeard
Full Member
***
Offline Offline

Activity: 156
Merit: 102


Bean Cash - More Than a Digital Currency!


View Profile WWW
August 29, 2014, 12:04:53 AM
Last edit: August 29, 2014, 03:59:29 PM by WhiteBeard
 #40

While this may not directly address block size, it will have an effect on it as well as other aspects of the entire system.

There is an inefficiency in bitcoin related to people needing to consolidate their inputs, and it ends up costing the system more than the transaction fees are worth due to unnecessary traffic. What we need to do is remove the need for people to consolidate all their tiny transactions, by doing it for them automatically.

Couldn't we add a feature to the wallet system that automatically consolidates all extant inputs to a particular address (any inputs at least X days old) into one transaction, by adding a marker/token to a new block that effectively authenticates the presence of all these inputs from previous blocks? There would then be no need to search further back down the block-chain for authentication of the inputs, decreasing the data size of any future outputs from that address. I would also make it automatically re-consolidate the returned change inputs by adding them back to the address from which they originally came, without having them travel the block-chain again.

To take this further, it effectively would be an auto-pruning measure, because at some point you would never need to go back to the genesis block for authentication.

This, ultimately, may or may not have an effect on difficulty, so care needs to be taken that we do not introduce inflation into the economy.

What size should each block be?  I'd say as compact as we can make them with as much data as we can cram into them. My proposal helps with that aspect.  Eventually, we will see the block-chain housing significantly more data than it already does.  

There are advantages and disadvantages to every proposal so far.  Does increasing/decreasing block sizes contribute to maintenance of current and future difficulty standards and thus have a positive or negative impact on inflation?  When we get a protocol in place to begin pruning we will then see a need to balance block size against chain length. To that end, I suppose, there will have to be a block size growth factor calculated in or the chain will become "unmanageably" long even with pruning and difficulty will skyrocket, especially as seen against the back-drop of the goal of bitcoin to become the currency of the world economy...

Just my ideas! What do you all think?

whitebeard
