Bitcoin Forum
Author Topic: Permanently keeping the 1MB (anti-spam) restriction is a great idea ...  (Read 104993 times)
tvbcof
Legendary
*
Offline Offline

Activity: 4592
Merit: 1276


View Profile
February 14, 2015, 12:05:29 PM
 #321

...
The original client did have a 33.5MB constraint on message length.  It could be viewed as an implicit limit on block sizes, as the current protocol transmits complete blocks as a single message.  There is nothing to indicate that Satoshi either intended this to be permanent or that it was a foundational part of the protocol.  It is a simplistic constraint that prevents an attack where a malicious or buggy client sends nodes an incredibly long message which needs to be received before it can be processed and invalidated.  Imagine your client having to download an 80TB message before it could determine that the message was invalid, and then doing that a dozen times before banning that node.  Sanity checks are always a good idea to remove edge cases.
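For anyone curious what such a sanity check looks like in practice, here is a minimal sketch in Python. The 24-byte header layout and the 0x02000000-byte cap reflect my understanding of the classic p2p message format; treat the details as illustrative rather than a copy of the real client.

Code:
import socket
import struct

MAX_SIZE = 0x02000000   # 33,554,432 bytes, roughly the 33.5MB figure quoted above

def read_message(sock: socket.socket):
    """Read one p2p message, rejecting oversized ones before downloading them."""
    # Header layout (assumed): 4-byte magic, 12-byte command, 4-byte length, 4-byte checksum
    header = sock.recv(24, socket.MSG_WAITALL)
    if len(header) < 24:
        raise ConnectionError("peer closed connection")
    magic, command, length, checksum = struct.unpack("<4s12sI4s", header)
    # The sanity check: the declared length alone is enough to reject the message,
    # so a malicious peer can never force us to download gigabytes before failing it.
    if length > MAX_SIZE:
        raise ValueError("oversized message; drop/ban this peer")
    payload = sock.recv(length, socket.MSG_WAITALL)
    return command.rstrip(b"\x00"), payload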

What's the problem with that?  You've never heard of Moore's law?  It's the universal faith-based psycho-engineering solution in these parts, and it seems to work fine on the lion's share of the participants.  Normally simply uttering the two words is sufficient to win any argument (hollow though the victory may be).


sig spam anywhere and self-moderated threads on the pol&soc board are for losers.
DooMAD
Legendary
*
Offline Offline

Activity: 3780
Merit: 3112


Leave no FUD unchallenged


View Profile
February 15, 2015, 12:00:56 PM
 #322

"anti-fork" people, or should I say "pro-bitcoin" people, really sound a lot saner to me in their phrasing, there is some tangible dementia and hostility in most of the "lets fork bitcoin" folks.

You want to point fingers about hostility?  I think you'll find this discussion started because MP accused Gavin of being a scammer and said there was no way, under any circumstances, that he would accept a fork.  If we had set off on a more polite note, something along the lines of "let's weigh up the pros and cons and come to some sort of compromise", then maybe things would have gone a little differently.

flound1129
Hero Member
*****
Offline Offline

Activity: 938
Merit: 1000


www.multipool.us


View Profile
February 16, 2015, 07:39:20 AM
 #323

If the block size remains at 1 MB, most of humanity will be forced by necessity to transact off chain.

(or on another chain)

Multipool - Always mine the most profitable coin - Scrypt, X11 or SHA-256!
flound1129
Hero Member
*****
Offline Offline

Activity: 938
Merit: 1000


www.multipool.us


View Profile
February 16, 2015, 07:41:15 AM
 #324

However, if block size is increased, there's really no reason why most miners won't include as many transactions as possible, since it doesn't really cost them anything. Transactors will no longer be required to pay to have their transactions included in the blockchain, and eventually profit-seeking miners will leave.

It costs mining pools nothing *now* to process transactions.  And in fact they are not even required to fill blocks up with transactions; there are empty blocks produced all the time.

Minimum fees can be set regardless of block size.

Multipool - Always mine the most profitable coin - Scrypt, X11 or SHA-256!
turvarya
Hero Member
*****
Offline Offline

Activity: 714
Merit: 500


View Profile
February 16, 2015, 09:09:58 AM
 #325

However, if block size is increased, there's really no reason why most miners won't include as many transactions as possible, since it doesn't really cost them anything. Transactors will no longer be required to pay to have their transactions included in the blockchain, and eventually profit-seeking miners will leave.

It costs mining pools nothing *now* to process transactions.  And in fact they are not even required to fill blocks up with transactions; there are empty blocks produced all the time.

Minimum fees can be set regardless of block size.
I don't think that is true. Looking at the last blocks, they all had transactions in them.
Could you show me one of these recent 0-transaction blocks?

https://forum.bitcoin.com/
New censorship-free forum by Roger Ver. Try it out.
Buffer Overflow
Legendary
*
Offline Offline

Activity: 1652
Merit: 1015



View Profile
February 16, 2015, 11:04:49 AM
 #326

However, if block size is increased, there's really no reason why most miners won't include as many transactions as possible, since it doesn't really cost them anything. Transactors will no longer be required to pay to have their transactions included in the blockchain, and eventually profit-seeking miners will leave.

It costs mining pools nothing *now* to process transactions.  And in fact they are not even required to fill blocks up with transactions; there are empty blocks produced all the time.

Minimum fees can be set regardless of block size.
I don't think that is true. Looking at the last blocks, they all had transactions in them.
Could you show me one of these recent 0-transaction blocks?

Yeah it does happen occasionally. Though of course they have the 1 transaction, which is the coinbase one.

DeathAndTaxes (OP)
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
February 16, 2015, 04:37:41 PM
 #327

I don't think, that is true. Looking at the last blocks, they all had transactions in them.
Could you show me one of this recent 0 transaction blocks?

Yeah it does happen occasionally. Though of course they have the 1 transaction, which is the coinbase one.

That is correct.  There are no zero-txn blocks because a block is invalid without a coinbase, so people saying 'empty' blocks are referring to blocks containing just the coinbase txn.  This will occur periodically due to the way that blocks are constructed.  To understand why, one needs to dig a little bit into what happens when a new block is found.

A pool server may have thousands of workers working on the current block when a new block is found, making that work stale. The server needs to quickly update all its workers to 'new work', and the longer that takes, the more revenue is lost.  If a new block is found on average every 600 seconds and it takes the pool just 6 seconds to update all its workers, then its actual revenue will be 0.5% lower than its theoretical revenue.  So pools want to update all their workers as quickly as possible to be as efficient as possible.

To update workers to work on a new block, the server must remove all the txns confirmed in the prior block from the memory pool, organize the remaining ones into a merkle tree, and compute a new merkle root.  Part of that merkle tree is the coinbase txn, which is unique for each worker (that is how your pool knows your shares are yours).  A different coinbase means a different merkle tree and root hash for each worker, which can be time consuming to compute.  So to save time (and reduce stale losses) the pool server will compute the simplest possible merkle tree for each worker, a single txn (the coinbase), and push that out to all workers; it then computes the full txn-set merkle tree and provides that to workers once they request new work.  However, if a worker solves a block with that first work assignment, it will produce an 'empty' (technically one-txn) block.
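A rough sketch of why the coinbase-only work is so much cheaper to hand out. The merkle construction follows the usual duplicate-the-last-hash rule; the transaction count and contents are made up for illustration.

Code:
import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids) -> bytes:
    """Merkle root over a list of 32-byte txids, duplicating the last hash on odd levels."""
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Coinbase-only work: the "tree" is a single hash, so fresh work can be pushed to
# every worker almost immediately after a new block arrives.
coinbase_txid = dsha256(b"coinbase paying worker 42")
assert merkle_root([coinbase_txid]) == coinbase_txid

# Full work: a worker-specific coinbase plus thousands of mempool txids means a
# different full tree per worker, which is the part the pool defers.
mempool = [dsha256(i.to_bytes(4, "little")) for i in range(3000)]
full_root = merkle_root([coinbase_txid] + mempool)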

This is just one example of how a 1MB limit will never achieve 1MB throughput.  All the numbers in the OP assume an upper limit, which is the theoretical situation of 100% of miners producing 1MB blocks with no orphans, inefficiencies, or stale work.  In reality, miners targeting less than 1MB, orphans, and inefficiencies in the network will mean real throughput is going to be lower than the limit.  VISA only has an average txn capacity of 2,000 tps but their network can handle a peak traffic of 24,000 tps.  Nobody designs a system with a specific limit and then assumes throughput will be equal to that upper limit.
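To put rough numbers on that gap, here is a back-of-the-envelope calculation. The 400-byte average transaction, 750KB miner soft target, and 2% empty-block share are assumptions chosen for illustration, not measurements.

Code:
BLOCK_INTERVAL = 600          # seconds between blocks, on average
HARD_LIMIT     = 1_000_000    # bytes
AVG_TXN_SIZE   = 400          # bytes per transaction (assumed)

# Theoretical ceiling: every block exactly 1MB of transactions.
ceiling_tps = HARD_LIMIT / AVG_TXN_SIZE / BLOCK_INTERVAL            # ~4.2 tps

# Assumed haircuts: miners targeting a smaller soft limit plus a small share of
# coinbase-only blocks like the ones described above.
SOFT_TARGET = 750_000         # bytes (assumed miner default)
EMPTY_SHARE = 0.02            # fraction of empty blocks (assumed)

realized_tps = (SOFT_TARGET / AVG_TXN_SIZE / BLOCK_INTERVAL) * (1 - EMPTY_SHARE)
print(f"ceiling ~{ceiling_tps:.1f} tps, realized ~{realized_tps:.1f} tps")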

Cryddit
Legendary
*
Offline Offline

Activity: 924
Merit: 1129


View Profile
February 16, 2015, 07:11:38 PM
 #328

VISA only has an average txn capacity of 2,000 tps but their network can handle a peak traffic of 24,000 tps.  Nobody designs a system with a specific limit and then assumes throughput will be equal to that upper limit.

That is a very valuable observation.  An 'adaptive' block size limit would set a limit at some multiple of the observed transaction rate, but most of its advocates (including me) haven't bothered to look up what the factor ought to be.  What you looked up above presents real, live information from a functioning payment system.

The lesson is that an acceptable peak txn rate for a working payment network is about 12x its average txn rate.

Which, in our case with average blocks being around 300 KB, means we ought to have maximum block sizes in the 3600KB range. 

And that those of us advocating a self-adjusting block size limit ought to be thinking in terms of 12x the observed utilization, not 3x the observed utilization.
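A minimal sketch of that self-adjusting rule, assuming the limit is recomputed from a trailing window of observed block sizes; the window length and the 1MB floor are illustrative choices, not anything that has been agreed on.

Code:
def adaptive_limit(recent_block_sizes, multiple=12, floor=1_000_000):
    """Next block size cap: `multiple` times the recent average, never below the 1MB floor."""
    avg = sum(recent_block_sizes) / len(recent_block_sizes)
    return max(int(multiple * avg), floor)

# With blocks averaging ~300KB, a 12x rule lands at the 3600KB figure above.
print(adaptive_limit([300_000] * 2016))   # -> 3600000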

David Rabahy
Hero Member
*****
Offline Offline

Activity: 709
Merit: 503



View Profile
February 17, 2015, 03:06:52 PM
 #329

The ability to handle a large block is a functional topic; e.g. apparently there is an API limit which forces blocks to be transmitted in fragments instead of as one large block.  If we want blocks larger than this limit then we have no choice but to write the code to handle fragmenting and reconstructing such large blocks.  Having an artificial block size maximum at this API limit is necessary until we write that code.  Alternatively, I suppose we could look at replacing the API with another one (if there even is one) that can handle larger blocks.

The desire/need for large blocks is driven by the workload.  If the workload is too much for the block size then the backlog https://blockchain.info/unconfirmed-transactions will grow and grow until the workload subsides; this is a trivial/obvious result from queuing theory.  Well, I suppose having some age limit or other arbitrary logic dropping work would avoid the ever-growing queue, but folks would just resubmit their transactions.  Granted, some would argue the overload condition is OK since it will force some behavior.
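The queueing point is easy to see in a toy simulation. The 5 tps demand and ~4 tps capacity figures are assumptions chosen only to show the shape of the problem.

Code:
def backlog_over_time(arrival_tps, capacity_tps, hours, start_backlog=0):
    """Track the unconfirmed-transaction backlog one 600-second block interval at a time."""
    backlog, history = start_backlog, []
    for _ in range(int(hours * 3600) // 600):
        backlog += arrival_tps * 600                   # transactions submitted this interval
        backlog -= min(backlog, capacity_tps * 600)    # at most one block's worth confirmed
        history.append(backlog)
    return history

# Demand of 5 tps against ~4 tps of capacity: the queue grows by ~600 txns per
# block and never drains until the workload subsides.
print(backlog_over_time(arrival_tps=5, capacity_tps=4, hours=6))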

I, for one, would recommend avoiding all fancy logic for an adaptive block size; just set the maximum block size to the API maximum and be done with it.  I also recommend hurrying up to figure out the code to fragment/reconstruct blocks bigger than that.

All of this is largely independent of figuring out how to suppress spam transactions.
2112
Legendary
*
Offline Offline

Activity: 2128
Merit: 1068



View Profile
February 17, 2015, 05:01:09 PM
 #330

I, for one, would recommend avoiding all fancy logic for an adaptive block size; just set the maximum block size to the API maximum and be done with it.  I also recommend hurrying up to figure out the code to fragment/reconstruct blocks bigger than that.
What is the point of this? The contiguous "block" is just an artifact of the current implementation and will soon be obsoleted.
On the storage side the database schema will change to support pruning. On the network side the protocol will be updated to support more efficient transmission via IBLT and other mechanisms that reduce duplication on the wire. In both cases the "blocks" will never appear anywhere as contiguous blobs of bytes.

Please comment, critique, criticize or ridicule BIP 2112: https://bitcointalk.org/index.php?topic=54382.0
Long-term mining prognosis: https://bitcointalk.org/index.php?topic=91101.0
David Rabahy
Hero Member
*****
Offline Offline

Activity: 709
Merit: 503



View Profile
February 18, 2015, 02:03:37 PM
 #331

I, for one, would recommend avoiding all fancy logic for an adaptive block size; just set the maximum block size to the API maximum and be done with it.  I also recommend hurrying up to figure out the code to fragment/reconstruct blocks bigger than that.
What is the point of this? The contiguous "block" is just an artifact of the current implementation and will soon be obsoleted.
On the storage side the database schema will change to support pruning. On the network side the protocol will be updated to support more efficient transmission via IBLT and other mechanisms that reduce duplication on the wire. In both cases the "blocks" will never appear anywhere as contiguous blobs of bytes.
If one wants/needs to transmit a block of transactions that a miner has discovered meeting the required difficulty, then one calls an API to do it.  That API has a maximum size.  If the block is larger than that size then the block is chopped into pieces and reassembled at the receiving end.  Whether the block is held in a single contiguous buffer is irrelevant, although it is almost certainly common.

The point is that until the code to do the chopping and reconstruction is ready, blocks are limited in size to the API maximum.  Given a large enough sustained workload, i.e. incoming transactions, the backlog will grow without bound until there's a failure.  Having Bitcoin fail would not be good.
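To make the chop-and-reassemble idea concrete, here is a hedged sketch; the chunk framing is invented for illustration and is not an existing protocol message.

Code:
MAX_MESSAGE = 32 * 1024 * 1024   # assumed per-message cap on the wire

def fragment(block_bytes: bytes, chunk_size: int = MAX_MESSAGE):
    """Split a serialized block into (index, total, data) chunks small enough to send."""
    total = (len(block_bytes) + chunk_size - 1) // chunk_size
    return [(i, total, block_bytes[i * chunk_size:(i + 1) * chunk_size])
            for i in range(total)]

def reassemble(chunks):
    """Rebuild the block from its chunks, checking that none went missing."""
    chunks = sorted(chunks)
    assert len(chunks) == chunks[0][1], "missing fragments"
    return b"".join(data for _, _, data in chunks)

big_block = bytes(100 * 1024 * 1024)              # a hypothetical 100MB block
assert reassemble(fragment(big_block)) == big_block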
Cryddit
Legendary
*
Offline Offline

Activity: 924
Merit: 1129


View Profile
February 18, 2015, 07:25:50 PM
 #332

At this time the software handles blocks up to 4GB and has been extensively tested with blocks up to 20MB.  So there's not really a problem in terms of API, nor a need to chop discovered batches of transactions up into bits.

The problem is sort-of political; there are a bunch of people who keep yelling over and over that an increase in block size will lead to more government control of the bitcoin ecosystem (as though there was ever going to be an economically significant bitcoin ecosystem that didn't have exactly that degree of government control) and that they don't want just any riffraff in a faraway country to be able to use their sacred blockchain for buying a cuppa coffee (as though allowing anybody anywhere to buy anything at any time somehow isn't the point of a digital cash system).

Neither point makes any damn sense.  As I read the opposition, they fall into three groups: the Trolls, the Suckers, and the Scammers.

The Trolls know the arguments are nonsense and are yelling anyway because it makes them feel important. 

The Suckers don't know nonsense when they hear it and are yelling because they're part of a social group with people who are yelling.

The Scammers have thought of a way to make a profit ripping people off during or after a hard fork, but it won't work unless there are Suckers who think that the coins on the losing side of the fork aren't worthless, so they keep yelling things to try to keep the Suckers confused as to the facts.

David Rabahy
Hero Member
*****
Offline Offline

Activity: 709
Merit: 503



View Profile
February 19, 2015, 04:32:17 PM
 #333

At this time the software handles blocks up to 4GB ...
Ah, that's plenty big for now; has it been tested?  If blocks that large can reliably be put through consistently then that would easily handle 10k transactions/second.
Cryddit
Legendary
*
Offline Offline

Activity: 924
Merit: 1129


View Profile
February 19, 2015, 08:13:06 PM
 #334

At this time the software handles blocks up to 4GB ...
Ah, that's plenty big for now; has it been tested?  If blocks that large can reliably be put through consistently then that would easily handle 10k transactions/second.

As I said, 20MB has been extensively tested.  4GB is okay with the software but subject to practical problems like propagation delays taking too long for blocks crossing the network.  4GB blocks would be fine if everybody had enough bandwidth to transmit and receive them before the next block came around, but not everybody does. Right now I wouldn't expect the peer-to-peer network to be stable with blocks much bigger than about 200MB, given current bandwidth limitations on most nodes. 
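Putting rough numbers on the propagation concern; the 10 Mbit/s figure is just an assumed typical node uplink, not a measurement.

Code:
def transfer_seconds(block_bytes, mbit_per_s):
    """Seconds to push a block through a single hop at the given bandwidth."""
    return block_bytes * 8 / (mbit_per_s * 1_000_000)

for size_mb in (20, 200, 4000):
    t = transfer_seconds(size_mb * 1_000_000, mbit_per_s=10)
    print(f"{size_mb:>5} MB block: ~{t:>6.0f}s per hop vs a 600s average block interval")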
2112
Legendary
*
Offline Offline

Activity: 2128
Merit: 1068



View Profile
February 19, 2015, 08:18:24 PM
 #335

Right now I wouldn't expect the peer-to-peer network to be stable with blocks much bigger than about 200MB, given current bandwidth limitations on most nodes. 
Are you talking about the legacy peer-to-peer protocol in Bitcoin Core or about the new, sensible implementation from Matt Corallo?

https://bitcointalk.org/index.php?topic=766190.0

Please comment, critique, criticize or ridicule BIP 2112: https://bitcointalk.org/index.php?topic=54382.0
Long-term mining prognosis: https://bitcointalk.org/index.php?topic=91101.0
zebedee
Donator
Hero Member
*
Offline Offline

Activity: 668
Merit: 500



View Profile
February 22, 2015, 03:03:10 AM
 #336

As I read the opposition they fall into three groups, the Trolls, the Suckers, and the Scammers.

The Trolls know the arguments are nonsense and are yelling anyway because it makes them feel important. 

The Suckers don't know nonsense when they hear it and are yelling because they're part of a social group with people who are yelling.

The Scammers have thought of a way to make a profit ripping people off during or after a hard fork, but it won't work unless there are Suckers who think that the coins on the losing side of the fork aren't worthless, so they keep yelling things to try to keep the Suckers confused as to the facts.
lol, so true!
BusyBeaverHP
Full Member
***
Offline Offline

Activity: 209
Merit: 100


View Profile
March 13, 2015, 03:43:46 AM
 #337

I just wanted to bump this great article for its lucid explanation of why the blockchain needs to grow.
thy
Hero Member
*****
Offline Offline

Activity: 685
Merit: 500


View Profile
March 14, 2015, 12:48:49 PM
 #338

Very interesting OP topic, DeathAndTaxes.  Where are we at the moment?  What's the avg number of transactions per second for, let's say, the last month, and what was the avg 6 months ago and a year ago as a comparison?

2**256-2**32
Newbie
*
Offline Offline

Activity: 2
Merit: 0


View Profile
March 14, 2015, 01:13:56 PM
Last edit: March 14, 2015, 01:56:53 PM by 2**256-2**32
 #339

At this time the software handles blocks up to 4GB and has been extensively tested with blocks up to 20MB.  So there's not really a problem in terms of API, nor a need to chop discovered batches of transactions up into bits.

The maximum message size of the p2p protocol is 32MB, which caps the block size implicitly in addition to the explicit 1MB limit.

Beyond all other insanity, 4GB blocks would be completely worthless due to the sigops limit.
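Rough arithmetic on why the sigops limit makes huge blocks pointless. The 20,000-sigop cap matches my reading of the consensus rules of that era, and the per-transaction figures are assumptions.

Code:
MAX_BLOCK_SIGOPS = 20_000     # consensus cap of that era (MAX_BLOCK_SIZE / 50)
SIGOPS_PER_TXN   = 2          # assumed for a typical simple transaction
BYTES_PER_TXN    = 250        # assumed typical size

max_txns = MAX_BLOCK_SIGOPS // SIGOPS_PER_TXN          # 10,000 transactions
usable   = max_txns * BYTES_PER_TXN                    # ~2.5 MB of useful payload
print(f"{max_txns} txns max, ~{usable / 1e6:.1f} MB usable out of a 4,000 MB block")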
adworker
Full Member
***
Offline Offline

Activity: 190
Merit: 100



View Profile
March 14, 2015, 03:00:43 PM
 #340

Very long read, OP, but well worth it.  As I see it, some older coins like Litecoin effectively have a 4MB limit per 10 minutes, so I don't understand why so many people defend the 1MB-per-10-minutes limit (looking at the 20MB Gavin fork pool).
