Bitcoin Forum
Author Topic: Permanently keeping the 1MB (anti-spam) restriction is a great idea ...  (Read 103902 times)
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218


Gerald Davis


View Profile
February 13, 2015, 05:08:37 PM
 #321

OP, weren't you vehemently against raising the limit a few years ago? I think I remember a lot of intellectual technical discussion around here involving you, gmaxwell and others regarding this matter. Around v0.0.9?

I don't recall that being the case but I could be wrong (on other issues my opinion has changed over time :) ).  I do recall around that time (I assume you mean v0.9.0, not v0.0.9) the backlog of unconfirmed transactions was growing.  The reason is that while the soft limit had been raised, the default value hard-coded in bitcoind was 250KB and most miners never changed it.  The only miners targeting a different size were targeting smaller, not larger, blocks.  Even as the backlog grew they felt no urgency in changing it.  Some uninformed people advocated raising the 1MB cap as a way to "fix" the problem.  I tried (and mostly failed) to explain that miners could already make blocks at least 4x larger and were opting not to.  Changing the max only allows larger blocks if miners decide to make them.  The developers ended up forcing the issue by first raising the default block size so that it didn't remain fixed at 250KB, and then removing the default size altogether.  Today bitcoind requires you to set an explicit block size.  If you don't set one you can't mine.
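
To make the distinction concrete, here is a minimal sketch (Python, hypothetical names) of how a miner's own soft target sits inside the consensus cap; raising the cap alone changes nothing unless miners also raise their target:

Code:
from collections import namedtuple

# Hypothetical txn record for illustration; real mempool entries carry more state.
Tx = namedtuple("Tx", "size fee_per_byte")

HARD_CAP = 1_000_000    # consensus rule: blocks larger than this are invalid

def build_block(mempool, soft_target=250_000):
    # Greedy highest-fee-first fill, stopping at the miner's own soft target.
    limit = min(soft_target, HARD_CAP)
    block, size = [], 0
    for tx in sorted(mempool, key=lambda t: t.fee_per_byte, reverse=True):
        if size + tx.size <= limit:
            block.append(tx)
            size += tx.size
    return block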

Quote
I am personally very much against hard forks of this sort. However I am all in for a new crypto with new parameters that considers how a previous crypto has lagged behind. From my point of view a hard fork with a lot of publicity to adhere to and "update" to keep up with is simply an act of a few controlling the mass. Whether it is for a good reason or bad or whatever, it really breaks the original principles of decentralization and fairness.

I have to disagree.  Bitcoin is a protocol and protocols change over time.  Constantly reinventing the wheel means a lot of wasted effort; you end up with dozens of half-used systems instead of one powerful one.  Look at TCP/IP or even HTML.  Yeah, they have a lot of flaws, and they have a lot of early assumptions baked in which produce inefficiency.  Evolution is also hard: look at the mess of browser standards or how long the migration to IPv6 has taken.  Despite the problems, the world is better for having widely deployed protocols instead of constantly starting over.

In the crypto space, however, 'starting over' means a chance at catching lightning in a bottle and becoming insanely rich.  That has led to a lot of attempts, but not much has come from it so far.  Alternatives are an option but they shouldn't be the first option.  The first option should be evolution, but if a proposal fails and a developer strongly believes that in the long run the network can't be successful without it, then it should be explored in an alternate system.  There are some things about Bitcoin which may be impossible to change, and for those an alternate might be the only choice.  The block size isn't one of them.

Jumping specifically to the block size: the original client had no* block size limit.  The fork occurred when it was capped down to 1MB.  The 1MB figure has in some circles become holy writ, but there isn't anything to indicate it had any special significance at the time.  It was a crude way to limit the damage caused by spam or a malicious attacker.  It is important to understand that the cap doesn't even stop spam (either malicious or just wasteful); the cost of mining, the dust limit, and the minimum fee to relay low-priority txns are what reduced spam, by making it less economical.

The cap still allowed the potential for abuse but the scope of that abuse is limited.  If 10% of the early blocks were maxed out it would still have added only about 5GB per year to the blockchain size.  That would have been bad but it would have been survivable, and it would have required the consent of 10% of the hashrate.  Without the cap a single malicious entity could have increased the cost of joining the network by far more.  Imagine if, back before you had even heard of Bitcoin, when the only client was a full node, joining the network had required downloading 100GB or 500GB.  Would Bitcoin even be here today if that had happened?
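
A back-of-the-envelope check of that figure:

Code:
blocks_per_year = 6 * 24 * 365      # one block per ~10 minutes
max_block       = 1_000_000         # 1MB cap, in bytes
spam_fraction   = 0.10              # 10% of blocks maxed out by abusers

extra_bytes = blocks_per_year * spam_fraction * max_block
print(extra_bytes / 1e9, "GB per year")   # ~5.3 GB/year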

Satoshi stated in the white paper that consensus rules can be changed if needed.  He also directly stated that the block limit was temporary and could be increased in the future, phased in at a higher block height.  Now I am not saying everything Satoshi said is right, but I have to disagree that the 'original principles of decentralization and fairness' preclude changes to the protocol.  Some people today may believe that any change invalidates the original principles, but that was never stated as an original principle. The protocol has always been changeable by an 'economic majority', and likewise that majority can't prevent the minority from continuing to operate the unmodified protocol.  It is impossible to design an open protocol which can't be changed; as a practical matter, however, a fork (any fork) will only be successful if a supermajority of users, miners, developers, companies, service providers, etc. support the change.

There are four universal truths about open source peer to peer networks:
a) It is not possible to prevent a change to the protocol.
b) A change in the protocol will lead to two mutually incompatible networks.
c) These parallel networks will continue to exist until one network is abandoned completely.
d) There is no mechanism to force users to switch networks so integration is only possible through voluntary action.

There is a concept called the tyranny of the minority.  It isn't possible for a protocol to prevent changes that lack the explicit approval of every single user, but even if it were, that would not be an anti-fragile system.  A bank could purchase a single satoshi, hang on to it, and use that as a way to block any improvement to the protocol, ensuring it eventually fails.  The earliest fork fixed a bug which allowed the creation of billions of coins.  There is no evidence it had universal support: the person who used the bug to create additional coins saw the new version erase coins that the prior network had declared valid.  Still, a fork is best if either a negligible number of people support it or a negligible number oppose it.  The worst-case scenario would be a roughly 50-50 split with both forks continuing to co-exist.

The original client did have a 33.5MB constraint on message length.  It could be viewed as an implicit limit on block sizes, since the current protocol transmits complete blocks as a single message.  There is nothing to indicate that Satoshi either intended this to be permanent or considered it a foundational part of the protocol.  It is a simplistic constraint that prevents an attack where a malicious or buggy client sends nodes an incredibly long message which must be received in full before it can be processed and rejected.  Imagine your client having to download an 80TB message before it could determine that the message was invalid, and then doing that a dozen times before banning that node.  Sanity checks are always a good idea to remove edge cases.
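
A sketch of the kind of sanity check involved, assuming the standard 24-byte P2P message header (the function name is illustrative):

Code:
import struct

MAX_SIZE = 0x02000000    # 33,554,432 bytes: the ~33.5MB message cap

def payload_length(header: bytes) -> int:
    # 24-byte P2P header: magic(4) | command(12) | payload length(4) | checksum(4)
    magic, command, length, checksum = struct.unpack("<4s12sI4s", header)
    if length > MAX_SIZE:
        raise ValueError("oversized message: reject before downloading the payload")
    return length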
jmw74
Full Member
***
Offline Offline

Activity: 236


View Profile
February 13, 2015, 08:32:20 PM
 #322

Satoshi stated in the white paper that consensus rules can be changed if needed.  

He didn't even need to state that; anyone can start an alternative network that forks from the existing one. Whether the majority follows the changed fork or the unchanged fork is immaterial.


Quote
There are four universal truths about open source peer to peer networks:
a) It is not possible to prevent a change to the protocol.
b) A change in the protocol will lead to two mutually incompatible networks.
c) These parallel networks will continue to exist until one network is abandoned completely.
d) There is no mechanism to force users to switch networks so integration is only possible through voluntary action.

There is a concept called the tyranny of the minority.  It isn't possible for a protocol to prevent changes that lack the explicit approval of every single user, but even if it were, that would not be an anti-fragile system.  A bank could purchase a single satoshi, hang on to it, and use that as a way to block any improvement to the protocol, ensuring it eventually fails.  The earliest fork fixed a bug which allowed the creation of billions of coins.  There is no evidence it had universal support: the person who used the bug to create additional coins saw the new version erase coins that the prior network had declared valid.  Still, a fork is best if either a negligible number of people support it or a negligible number oppose it.  The worst-case scenario would be a roughly 50-50 split with both forks continuing to co-exist.

What you mean is, you can't prevent other people from using a different protocol.

They can't change YOUR protocol, but they can leave you all alone with few other people (or none) to talk to.

Still you have a choice. You can accept a smaller consensus. Let's say someone forked bitcoin to change the rules so that instead of 21 million coins, an infinite number would be produced (25 coins per block forever, for example). And let's say 90% of bitcoin users accepted that fork.

If you don't want to accept this change, all it means is, your consensus size shrunk, probably back to 2011 levels. Personally I would accept this before infinite coins.

Just because you lose 90% of the consensus doesn't mean you'll eventually lose 100% of it.
solex
Legendary
*
Offline Offline

Activity: 1078


100 satoshis -> ISO code


View Profile
February 13, 2015, 09:03:17 PM
 #323

Hard forks will become virtually impossible in the future as too many opinionated developers bash each other. My only hope is that the important scalability changes are made before there are too many dependencies. Ossification is rapidly approaching.

That is why the 1MB limit is the single most important threat to Bitcoin, because ossification is rapidly approaching. The limit can irreparably damage Bitcoin's future growth and its winner-take-all honey-badger image. Its default status as an "electronic gold" store-of-value is badly tarnished if it is allowed to run into a brick wall when there were years of warnings, threads and discussions beforehand. For non-technical supporters and investors the fear will remain that other major problems are latent and ignored.

Some people argue against increasing the limit because they "want Bitcoin the way it is now", that "it is working fine". The reality is counter-intuitive. The way it is now is that it has no effective block limit, and allowing the average block size to approach 1MB would introduce a massive untested change into every full node simultaneously. I say untested because the only comparable event was the 250KB soft-limit, which rapidly went wrong; the major mining pools had to come to the rescue and raise their default block limit. With the 1MB cap the number of nodes which need to quickly upgrade is several thousand times larger. Chaos.

People are worried about government action, yet for every government which tries to ban Bitcoin another will give it free rein, for every government that wants to over-regulate it, another will decide upon regulation-lite. The threat is from within, not without.

Off-chain solutions are developing. An Overstock purchase by a Coinbase account holder does not hit the blockchain. There must be many business flows which are taking volume away from the main-chain. But this is not enough to stabilize the average block size.

Challenges are also opportunities. A successful hard fork, executed as Gavin suggests with a block-version supermajority, will take many months and allow a smooth transition to a scaling state. Hard forks on the wish list, and ideas like stale dust reverting to the miners (if there is majority consensus), will be seen as achievable. Ossification might be delayed a little to allow other hard-fork improvements if this first challenge is successfully handled.
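
For illustration, a supermajority rollout can be as simple as counting version numbers in recent blocks. This sketch borrows the 750-of-1,000 threshold used in earlier version upgrades (BIP 34); the actual numbers would be whatever the fork proposal specifies:

Code:
def fork_active(recent_versions, fork_version=4, threshold=750, window=1000):
    # Count how many of the last `window` blocks signal the new version.
    signalling = sum(1 for v in recent_versions[-window:] if v >= fork_version)
    return signalling >= threshold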


grau
Hero Member
*****
Offline Offline

Activity: 836


bits of proof


View Profile WWW
February 13, 2015, 10:36:51 PM
 #324

That is why the 1MB limit is the single most important threat to Bitcoin, because ossification is rapidly approaching.
The limit can irreparably damage Bitcoin's future growth and its winner-take-all honey-badger image.

The block limit increase may eventually pass as an intentional hard fork for good reasons, but I think it is not relevant to the long-term fate of Bitcoin, the digital gold.

It would be naive to assume that the block size limit is the last feature we need to change to ensure eternal prosperity for Bitcoin. We will hit other limits or yet-unknown issues and the problem will repeat, but by that time we will have even less (likely zero) chance to orchestrate a hard fork.

The block chain as we know it bootstrapped the digital gold, but is unlikely to be its final home. We have to restore the ability to innovate and enable individuals to express their opinion by moving their digital gold to a side chain they prefer for whatever reason.

I know it would be simpler if one were not forced to compare alternatives but could simply HODL, but that is unfortunately orthogonal to innovation and the diverse needs of global adoption.

CoinCidental
Legendary
*
Offline Offline

Activity: 1246


Si vis pacem, para bellum


View Profile
February 14, 2015, 11:16:09 AM
 #325

Could it become like those congressional bills where all the crap legislation gets stuck inside the fine print of the guns 'n drugs 'n terrorists 'n kids cover page?

While in theory that is the most sensible idea, in practice, adding other changes will only slow down the implementation of what already has proved to be very contentious (yet shouldn't have been).


Heh.  It would be a good idea, for example, to add code that sweeps "dust" more than 8 years old (ie, starting with the very oldest dust, at the beginning of next year) into the miners' pockets.  If something has been sitting there for 8 years and it's too small to pay for the fees that would be needed to spend it, then it's useless to its present owner, burdensome to the whole network to keep track of, and ought to be aggregated and paid to miners for network security. 

But if you think the blocksize discussion is contentious?  Sweeping dust would make the ultraconservatives and ultralibertarians here absolutely foam at the mouth.  I'll not even suggest such a thing because the discussion would go absolutely off the rails.

I'd cheerfully go even further and "sweep" *any* output that's been sitting there for >20 years (ie, lost keys) into a "mining fund," then have the coinbase of each block take 0.001% of the current mining fund balance in addition to the block subsidy.  Anybody whose keys aren't actually lost can avoid the haircut by moving her funds from one pocket to another in a self-to-self transaction.



sweep the dust that's too small to transact, but anything substantial should NEVER be swept imo
some people, maybe even satoshi, have left the early blocks in a will to their kids or grandchildren etc

we have to remember that dust today might be enough to buy a Lamborghini in 50 years

others may be young and keeping cold storage for retirement while working on other projects away from bitcoin

i don't think wallets should be "repossessed" under any circumstances, even if they appear to have been abandoned for years

if we did that it would only be a matter of time before someone's wallet got repossessed and the owner later tried to claim it

tvbcof
Legendary
*
Offline Offline

Activity: 2352


View Profile
February 14, 2015, 12:05:29 PM
 #326

...
The original client did have a 33.5MB constraint on message length.  It could be viewed as an implicit limit on block sizes, since the current protocol transmits complete blocks as a single message.  There is nothing to indicate that Satoshi either intended this to be permanent or considered it a foundational part of the protocol.  It is a simplistic constraint that prevents an attack where a malicious or buggy client sends nodes an incredibly long message which must be received in full before it can be processed and rejected.  Imagine your client having to download an 80TB message before it could determine that the message was invalid, and then doing that a dozen times before banning that node.  Sanity checks are always a good idea to remove edge cases.

What's the problem with that?  You've never heard of Moore's law?  It's the universal faith-based psycho-engineering solution in these parts, and it seems to work fine on the lion's share of the participants.  Normally simply uttering the two words is sufficient to win any argument (hollow though the victory may be.)


DooMAD
Legendary
*
Offline Offline

Activity: 1456



View Profile WWW
February 15, 2015, 12:00:56 PM
 #327

"anti-fork" people, or should I say "pro-bitcoin" people, really sound a lot saner to me in their phrasing, there is some tangible dementia and hostility in most of the "lets fork bitcoin" folks.

You want to point fingers about hostility?  I think you'll find this discussion started because MP accused Gavin of being a scammer, saying there was no way under any circumstances that he would accept a fork.  If we had set off on a more polite note, something along the lines of "let's weigh up the pros and cons and come to some sort of compromise", then maybe things would have gone a little differently.

flound1129
Hero Member
*****
Offline Offline

Activity: 854


www.multipool.us


View Profile
February 16, 2015, 07:39:20 AM
 #328

If the block size remains at 1 MB, most of humanity will be forced by necessity to transact off chain.

(or on another chain)

Multipool - Always mine the most profitable coin - Scrypt, X11 or SHA-256!
flound1129
Hero Member
*****
Offline Offline

Activity: 854


www.multipool.us


View Profile
February 16, 2015, 07:41:15 AM
 #329

However, if block size is increased, there's really no reason why most miners won't include as many transactions as possible, since it doesn't really cost them anything. Transactors will no longer be required to pay to have their transactions included in the blockchain, and eventually profit-seeking miners will leave.

It costs mining pools nothing *now* to process transactions.  And in fact they are not even required to fill blocks up with transactions; empty blocks are produced all the time.

Minimum fees can be set regardless of block size.

Multipool - Always mine the most profitable coin - Scrypt, X11 or SHA-256!
turvarya
Hero Member
*****
Offline Offline

Activity: 714


View Profile
February 16, 2015, 09:09:58 AM
 #330

However, if block size is increased, there's really no reason why most miners won't include as many transactions as possible, since it doesn't really cost them anything. Transactors will no longer be required to pay to have their transactions included in the blockchain, and eventually profit-seeking miners will leave.

It costs mining pools nothing *now* to process transactions.  And in fact they are not even required to fill blocks up with transactions; empty blocks are produced all the time.

Minimum fees can be set regardless of block size.
I don't think that is true. Looking at the last blocks, they all had transactions in them.
Could you show me one of these recent 0-transaction blocks?

https://forum.bitcoin.com/
New censorship-free forum by Roger Ver. Try it out.
Buffer Overflow
Legendary
*
Offline Offline

Activity: 1652



View Profile
February 16, 2015, 11:04:49 AM
 #331

However, if block size is increased, there's really no reason why most miners won't include as many transactions as possible, since it doesn't really cost them anything. Transactors will no longer be required to pay to have their transactions included in the blockchain, and eventually profit-seeking miners will leave.

It costs mining pools nothing *now* to process transactions.  And in fact they are not even required to fill blocks up with transactions; empty blocks are produced all the time.

Minimum fees can be set regardless of block size.
I don't think that is true. Looking at the last blocks, they all had transactions in them.
Could you show me one of these recent 0-transaction blocks?

Yeah, it does happen occasionally. Though of course they have the one transaction, which is the coinbase.

DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218


Gerald Davis


View Profile
February 16, 2015, 04:37:41 PM
 #332

I don't think that is true. Looking at the last blocks, they all had transactions in them.
Could you show me one of these recent 0-transaction blocks?

Yeah, it does happen occasionally. Though of course they have the one transaction, which is the coinbase.

That is correct.  There are no zero-txn blocks because a block is invalid without a coinbase, so people saying 'empty' blocks are referring to blocks with just the coinbase txn.  This occurs periodically due to the way blocks are constructed.  To understand why, one needs to dig a little into what happens when a new block is found.

A pool server may have thousands of workers working on the current block when a new block is found, making that work stale. The server needs to quickly move all its workers to new work, and the longer that takes the more revenue is lost.  If a new block is found on average every 600 seconds and it takes the pool 6 seconds to update all its workers (so the average worker is updated after 3 seconds), its actual revenue will be roughly 0.5% lower than its theoretical revenue.   So pools want to update all their workers as quickly as possible to be as efficient as possible.
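
The 0.5% figure comes from the average worker sitting on stale work for half the push window:

Code:
block_interval = 600.0    # seconds between blocks, on average
push_window    = 6.0      # seconds to reach the last worker (assumed)

# Workers get new work spread uniformly across the push window, so the
# average worker wastes push_window / 2 seconds per round on stale work.
loss = (push_window / 2) / block_interval
print(f"{loss:.1%}")      # -> 0.5%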

To move workers to a new block the server must remove all the txns confirmed in the prior block from the memory pool, organize the remainder into a merkle tree, and compute a new merkle root.  Part of that merkle tree is the coinbase txn, which is unique for each worker (that is how your pool knows your shares are yours).  A different coinbase means a different merkle tree and root hash for each worker, which can be time-consuming to compute.  So to save time (and reduce stale losses) the pool server will compute the simplest possible merkle tree for each worker, one containing only the coinbase txn, and push that out to all workers immediately; it then computes the full merkle tree over the txn set and hands that to workers the next time they request work.  If a worker solves a block with that first work assignment, the result is an 'empty' (technically one-txn) block.
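
A sketch of the merkle construction (Bitcoin hashes with double-SHA256 and duplicates the last hash on odd counts); note the coinbase-only case needs no tree at all, which is why that first work push is essentially free:

Code:
import hashlib

def dsha256(b):
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids):
    layer = list(txids)              # 32-byte txn hashes
    if len(layer) == 1:
        return layer[0]              # coinbase-only: the root IS the txid
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])  # duplicate the last hash on odd counts
        layer = [dsha256(layer[i] + layer[i + 1])
                 for i in range(0, len(layer), 2)]
    return layer[0]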

This is just one example of why a 1MB limit will never achieve 1MB of throughput.  All the numbers in the OP assume an upper limit, which is the theoretical situation of 100% of miners producing 1MB blocks with no orphans, inefficiencies, or stale work.  In reality miners targeting less than 1MB, orphans, and inefficiencies in the network mean real throughput will be lower than the limit.  VISA handles an average of about 2,000 tps but its network can handle peak traffic of 24,000 tps.  Nobody designs a system with a specific limit and then assumes throughput will equal that upper limit.

Cryddit
Legendary
*
Offline Offline

Activity: 840


View Profile
February 16, 2015, 07:11:38 PM
 #333

VISA handles an average of about 2,000 tps but its network can handle peak traffic of 24,000 tps.  Nobody designs a system with a specific limit and then assumes throughput will equal that upper limit.

That is a very valuable observation.  An 'adaptive' block size limit would set a limit at some multiple of the observed transaction rate, but most of its advocates (including me) haven't bothered to look up what the factor ought to be. What you looked up above is real, live information from a functioning payment system.

The lesson being that an acceptable peak txn rate for a working payment network is about 12x its average txn rate. 

Which, in our case with average blocks being around 300 KB, means we ought to have maximum block sizes in the 3600KB range. 

And that those of us advocating a self-adjusting block size limit ought to be thinking in terms of 12x the observed utilization, not 3x the observed utilization.
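
As a sketch, such an adaptive limit is a one-liner over recent block sizes (the 12x headroom factor is the assumption under discussion):

Code:
def adaptive_limit(recent_sizes_kb, headroom=12):
    # headroom: peak-to-average ratio, per the VISA figures above
    avg = sum(recent_sizes_kb) / len(recent_sizes_kb)
    return headroom * avg

print(adaptive_limit([300] * 2016))   # ~300KB average -> 3600KB cap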

David Rabahy
Hero Member
*****
Offline Offline

Activity: 707



View Profile
February 17, 2015, 03:06:52 PM
 #334

The ability to handle a large block is a functional topic; e.g. apparently there is an API limit which will force blocks to be transmitted in fragments instead of as one large block.  If we want blocks larger than this limit then we have no choice but to write the code to handle fragmenting and reconstructing such large blocks.  Having an artificial block size maximum at this API limit is necessary until we write that code.  Alternatively, I suppose we could look at replacing the API with another one (if there even is one) that can handle larger blocks.

The desire/need for large blocks is driven by the workload.  If the workload is too much for the block size then the backlog https://blockchain.info/unconfirmed-transactions will grow and grow until the workload subsides; this is a trivial/obvious result from queuing theory.  Well, I suppose having some age limit or other arbitrary logic dropping work would avoid the ever-growing queue but folks would just resubmit their transactions.  Granted some would argue the overload condition is ok since it will force some behavior.
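
A toy illustration of that queueing result, assuming ~500-byte transactions so roughly 2,000 fit in a 1MB block:

Code:
def backlog(blocks, arriving_per_block, fitting_per_block, start=0):
    # When arrivals exceed capacity, the queue grows without bound.
    q = start
    for _ in range(blocks):
        q = max(0, q + arriving_per_block - fitting_per_block)
    return q

# ~2,500 txns arriving per 10-minute block vs ~2,000 fitting in 1MB:
# the backlog grows by ~500 txns per block, 72,000 after one day.
print(backlog(blocks=144, arriving_per_block=2500, fitting_per_block=2000))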

I, for one, would recommend avoiding all fancy logic for an adaptive block size; just set the maximum block size to the API maximum and be done with it.  I also recommend hurrying up to figure out the code to fragment/reconstruct blocks bigger than that.

All of this is largely independent of figuring out how to suppress spam transactions.
2112
Legendary
*
Offline Offline

Activity: 1988



View Profile
February 17, 2015, 05:01:09 PM
 #335

I, for one, would recommend avoiding all fancy logic for an adaptive block size; just set the maximum block size to the API maximum and be done with it.  I also recommend hurrying up to figure out the code to fragment/reconstruct blocks bigger than that.
What is the point of this? The contiguous "block" is just an artifact of the current implementation and will soon be obsolete.
On the storage side the database schema will change to support pruning. On the network side the protocol will be updated to support more efficient transmission via IBLT and other mechanisms that reduce duplication on the wire. In both cases the "blocks" will never appear anywhere as contiguous blobs of bytes.

Please comment, critique, criticize or ridicule BIP 2112: https://bitcointalk.org/index.php?topic=54382.0
Long-term mining prognosis: https://bitcointalk.org/index.php?topic=91101.0
David Rabahy
Hero Member
*****
Offline Offline

Activity: 707



View Profile
February 18, 2015, 02:03:37 PM
 #336

I, for one, would recommend avoiding all fancy logic for an adaptive block size; just set the maximum block size to the API maximum and be done with it.  I also recommend hurrying up to figure out the code to fragment/reconstruct blocks bigger than that.
What is the point of this? The contiguous "block" is just an artifact of the current implementation and will soon be obsolete.
On the storage side the database schema will change to support pruning. On the network side the protocol will be updated to support more efficient transmission via IBLT and other mechanisms that reduce duplication on the wire. In both cases the "blocks" will never appear anywhere as contiguous blobs of bytes.
If one wants/needs to transmit a block of transactions that a miner has discovered meets the required difficulty, one calls an API to do it.  That API has a maximum size.  If the block is larger than that size then the block is chopped into pieces and reassembled at the receiving end.  Whether the block is held in a single contiguous buffer is irrelevant, although it is almost certainly common.

The point is that until the code to do the chopping and reconstruction is ready, blocks are limited in size to the API maximum.  Given a large enough sustained workload, i.e. incoming transactions, the backlog will grow without bound until there's a failure.  Having Bitcoin fail would not be good.
Cryddit
Legendary
*
Offline Offline

Activity: 840


View Profile
February 18, 2015, 07:25:50 PM
 #337

At this time the software handles blocks up to 4GB and has been extensively tested with blocks up to 20MB.  So there's not really a problem in terms of API, nor a need to chop discovered batches of transactions up into bits.

The problem is sort-of political; there are a bunch of people who keep yelling over and over that an increase in block size will lead to more government control of the bitcoin ecosystem (as though there was ever going to be an economically significant bitcoin ecosystem that didn't have exactly that degree of government control) and that they don't want just any riffraff in a faraway country to be able to use their sacred blockchain for buying a cuppa coffee (as though allowing anybody anywhere to buy anything at any time somehow isn't the point of a digital cash system).

Neither point makes any damn sense.  As I read the opposition, they fall into three groups: the Trolls, the Suckers, and the Scammers.

The Trolls know the arguments are nonsense and are yelling anyway because it makes them feel important. 

The Suckers don't know nonsense when they hear it and are yelling because they're part of a social group with people who are yelling.

The Scammers have thought of a way to make a profit ripping people off during or after a hard fork, but it won't work unless there are Suckers who think that the coins on the losing side of the fork aren't worthless, so they keep yelling things to try to keep the Suckers confused as to the facts.

David Rabahy
Hero Member
*****
Offline Offline

Activity: 707



View Profile
February 19, 2015, 04:32:17 PM
 #338

At this time the software handles blocks up to 4GB ...
Ah, that's plenty big for now; has it been tested?  If blocks that large can reliably be put through consistently then that would easily handle 10k transactions/second.
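
The rough arithmetic behind that estimate (the ~500-byte average transaction size is an assumption):

Code:
block_bytes  = 4_000_000_000   # 4GB block
avg_txn_size = 500             # bytes per txn (assumed)
interval     = 600             # seconds per block

print(block_bytes / avg_txn_size / interval)   # ~13,333 tps, above the 10k target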
Cryddit
Legendary
*
Offline Offline

Activity: 840


View Profile
February 19, 2015, 08:13:06 PM
 #339

At this time the software handles blocks up to 4GB ...
Ah, that's plenty big for now; has it been tested?  If blocks that large can reliably be put through consistently then that would easily handle 10k transactions/second.

As I said, 20MB has been extensively tested.  4GB is okay with the software but subject to practical problems like propagation delays taking too long for blocks crossing the network.  4GB blocks would be fine if everybody had enough bandwidth to transmit and receive them before the next block came around, but not everybody does. Right now I wouldn't expect the peer-to-peer network to be stable with blocks much bigger than about 200MB, given current bandwidth limitations on most nodes. 
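
Back-of-the-envelope support for that intuition (the peer count and block size here are assumptions):

Code:
block_bytes = 200 * 1024**2   # 200MB block
interval    = 600             # seconds per block
peers       = 8               # outbound connections to relay to (assumed)

mbit = block_bytes * 8 * peers / interval / 1e6
print(f"~{mbit:.0f} Mbit/s sustained uplink just to forward blocks")   # ~22 Mbit/s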
2112
Legendary
*
Offline Offline

Activity: 1988



View Profile
February 19, 2015, 08:18:24 PM
 #340

Right now I wouldn't expect the peer-to-peer network to be stable with blocks much bigger than about 200MB, given current bandwidth limitations on most nodes. 
Are you talking about the legacy peer-to-peer protocol in Bitcoin Core or about the new, sensible implementation from Matt Corallo?

https://bitcointalk.org/index.php?topic=766190.0

Please comment, critique, criticize or ridicule BIP 2112: https://bitcointalk.org/index.php?topic=54382.0
Long-term mining prognosis: https://bitcointalk.org/index.php?topic=91101.0