Author Topic: Bitcoin can only process 7 transactions per second??  (Read 7050 times)
Kluge (OP) - Donator, Legendary
May 14, 2014, 12:08:48 PM  #1

I must be missing something here.

According to the Bitcoin Wiki:
VISA can do 8,500 tps.
Paypal can do 100 tps.
Bitcoin can do 7 tps.

A block can contain 1MB of data. 1MB is the equivalent of 1,048,576 bytes.

Stealing from DH's post:
Inputs tend to be approximately 180 bytes, outputs tend to be approximately 40 bytes.  There are some additional bytes for overhead in the transaction (transactionID, input_qty, output_qty, etc).  A safe assumption would be 50 bytes of overhead.

Sendtoaddress=(40+180+50) bytes
Sendmany=(40n+180+50) bytes, where n is # of outputs

1,048,576-180-50=1,048,346 (block size remaining after overhead & input)
1,048,346/40=26,209 (#transactions able to be included in block assuming one input)
Block target conf. time = 600 seconds.
26,209/600=43.68 transactions per second
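
Written out as a quick sanity-check script (same byte estimates as above; note it implicitly treats the whole block as one giant single-input transaction):

Code:
# Reproduces the arithmetic above using the thread's byte estimates.
# Implicit assumption: the entire block is ONE transaction with ONE input,
# so everything after that single input and overhead is spent on outputs.
BLOCK_BYTES = 1_048_576
INPUT_BYTES, OUTPUT_BYTES, OVERHEAD_BYTES = 180, 40, 50
BLOCK_INTERVAL_SECONDS = 600

remaining = BLOCK_BYTES - INPUT_BYTES - OVERHEAD_BYTES   # 1,048,346 bytes for outputs
outputs = remaining // OUTPUT_BYTES                      # 26,208 outputs
print(outputs / BLOCK_INTERVAL_SECONDS)                  # ~43.7 outputs per second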

It will be lower in practice, maybe 30-35 tps - payment processors like BitPay will probably not always be able to send from just one input. However, they and online wallet services are really the only ones in a position to "bulk-pay". They also have the ability to design their services to prevent low-value cash-outs by attaching service fees if the person doesn't meet some threshold -- say you arbitrarily charge BTC.001 if the withdrawal request is for less than BTC.02.

Payment processors and online wallet services are in a unique position to dramatically increase Bitcoin's tps by dramatically increasing network efficiency: allowing more transactions per block, decreasing fees for users, increasing the value of a kilobyte with regards to mining fees (BitPay sure wouldn't want to miss a block affecting 5,000 customers), decreasing the associated costs of an unnecessarily enormous blockchain (bandwidth, storage, processing), and putting off the "we gotta increase the block size again" debate for another couple of years.
Foxpup - Legendary (Vile Vixen and Miss Bitcointalk 2021-2023)
May 14, 2014, 12:39:21 PM  #2

Correction: a block can currently contain 1MB of data. Originally, there was no limit, but at some point the 1MB limit was added to prevent DOS attacks. It was never intended to be permanent and can be increased or removed entirely if necessary (I think Gavin said the limit would be raised once pruning is implemented).

Kluge (OP) - Donator, Legendary
May 14, 2014, 12:47:14 PM  #3

Quote from: Foxpup
Correction: a block can currently contain 1MB of data. Originally, there was no limit, but at some point the 1MB limit was added to prevent DOS attacks. It was never intended to be permanent and can be increased or removed entirely if necessary (I think Gavin said the limit would be raised once pruning is implemented).
It was 500kb prior. Maybe even lower when max size was first implemented... I wasn't around way back then.
Mike Hearn - Legendary (expert)
May 14, 2014, 01:10:44 PM  #4

The number is based on observed average tx size, not your guesstimate as to what it might be.

It's not really 7tps of course. It's less. You can't run the system maxed out all the time, that'd be unstable.
cr1776 - Legendary
May 14, 2014, 01:25:18 PM  #5

This thread discusses the issue too - with Satoshi weighing in:

https://bitcointalk.org/index.php?topic=1347.0

There are other ones too, but that one is interesting.
Kluge (OP) - Donator, Legendary
May 14, 2014, 01:51:14 PM  #6

Quote from: Mike Hearn
The number is based on observed average tx size, not your guesstimate as to what it might be.

It's not really 7tps of course. It's less. You can't run the system maxed out all the time, that'd be unstable.
If the maximum block size were left alone, isn't it reasonable to assume payment processors (along with other efficiency solutions like CoinJoin) will effectively increase network efficiency (more transactions per kB, less strain on full nodes for the same number of transactions), especially as the "minimum" fee needed for a high probability of being included in the next block increases?

Maybe it's experimentally 7tps, but that's a measurement based on how users are currently behaving more than on the actual code, right? VISA does everything on its own with very standard scripts - there's no well-known way for customers to use credit in a certain way to increase tps on VISA's network. With Bitcoin, though, users' behavior may be able to dramatically increase tps, and higher "effective minimum" fees (to likely be included in the next block) could effectively increase the tps.
Sukrim - Legendary
May 14, 2014, 04:00:11 PM  #7

Quote from: Kluge
With Bitcoin, though, users' behavior may be able to dramatically increase tps

https://en.bitcoin.it/wiki/Maximum_transaction_rate

Quote
For 1MB (1,000,000 byte) blocks this implies a theoretical maximum rate of 10tx/s.

While that is an impressive increase of nearly 50% if everybody cooperates to create minimum-sized transactions, 10 TX/s is still not that impressive compared to other systems...
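
For reference, the wiki's 10 tx/s figure implies a minimum-sized transaction of roughly 166 bytes (one input, one output); the back-of-the-envelope check:

Code:
# Back-of-the-envelope check of the wiki's theoretical maximum.
# 166 bytes is roughly what a minimum one-input, one-output transaction takes
# (and is about what the 10 tx/s figure implies for 1,000,000-byte blocks).
MAX_BLOCK_BYTES = 1_000_000
MIN_TX_BYTES = 166
BLOCK_INTERVAL_SECONDS = 600

txs_per_block = MAX_BLOCK_BYTES // MIN_TX_BYTES           # 6024 transactions
print(txs_per_block / BLOCK_INTERVAL_SECONDS)             # ~10.0 tx/s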

DannyHamilton - Legendary
May 14, 2014, 07:03:45 PM  #8

Quote from: Kluge
I must be missing something here.

You are.

Quote from: Kluge
A block can contain 1MB of data. 1MB is the equivalent of 1,048,576 bytes.

Stealing from DH's post:
Inputs tend to be approximately 180 bytes, outputs tend to be approximately 40 bytes.  There are some additional bytes for overhead in the transaction (transactionID, input_qty, output_qty, etc).  A safe assumption would be 50 bytes of overhead.

Sendtoaddress=(40+180+50) bytes
Sendmany=(40n+180+50) bytes, where n is # of outputs

1,048,576-180-50=1,048,346 (block size remaining after overhead & input)

You are only subtracting 1 input and 1 transaction overhead.

You need:
1,048,576 - 180ni - 50n = (bytes remaining for outputs)

Where n is the number of transactions in the block, and i is the average number of inputs per transaction.

Quote from: Kluge
1,048,346/40=26,209 (#transactions able to be included in block assuming one input)

You have not calculated the number of transactions that are able to be included in a block assuming one input.  You have calculated the number of outputs a single transaction can have if the entire block has only 1 transaction and that transaction has only 1 input.

Each transaction will have at least 1 input.  As such, you've calculated how many outputs a single sender can have if there is only 1 transaction every 10 minutes.

Quote from: Kluge
Block target conf. time = 600 seconds.
26,209/600=43.68 transactions per second

That's not a valid number at all.  I suppose it's a bit like calculating how many addresses per second a single sender can send to if they include all the outputs into a single large transaction that they create once every 10 minutes.
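
Redoing the arithmetic with per-transaction accounting (same approximate byte sizes as above) lands close to the commonly quoted figure:

Code:
# Per-transaction accounting, as described above: every transaction carries its own
# input(s), output(s), and overhead. Byte sizes are the thread's rough estimates.
BLOCK_BYTES = 1_048_576
INPUT_BYTES, OUTPUT_BYTES, OVERHEAD_BYTES = 180, 40, 50
BLOCK_INTERVAL_SECONDS = 600

def tps(avg_inputs, avg_outputs):
    tx_bytes = avg_inputs * INPUT_BYTES + avg_outputs * OUTPUT_BYTES + OVERHEAD_BYTES
    return (BLOCK_BYTES // tx_bytes) / BLOCK_INTERVAL_SECONDS

print(round(tps(1, 1), 2))   # 270-byte txs -> ~6.47 tx/s
print(round(tps(1, 2), 2))   # 350-byte txs -> ~4.99 tx/s
print(round(tps(2, 2), 2))   # 490-byte txs -> ~3.56 tx/s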
Kluge (OP) - Donator, Legendary
May 15, 2014, 02:09:00 AM (last edit: May 15, 2014, 02:26:07 AM)  #9

I've totally butchered my thought process, here. I know what I mean, but I'm really sucking at explaining this (sorry), and then I threw in the CoinJoin example which isn't directly related, but something else entirely. I was, at first, thinking about gambling sites. Generally, you buy your chips (off-blockchain bitcoin IOUs), do a bunch of micro-transactions off the blockchain on the gambling site, then cash out (or, more likely, do nothing because you're out of IOUs). You have relatively "chunky" inputs instead of a bunch of micro-transactions floating around - but in a higher-fee future, it may be reasonable to want larger everyday purchases off the blockchain, too.

Payment processors and online wallets should be similar -- similar to Paypal in this regard, too (where it works best if it's pre-funded). Blockchain.info or whatever would hold relatively large inputs - and I'd be very surprised if they don't currently hold relatively large inputs compared to all funded inputs - because you don't want to keep using a client for every little purchase. You don't want to deal with the time and effort, and you don't want to pay unnecessary fees, so you put maybe 10 "normal purchases" worth of bitcoin into this online wallet.

[Separate but related idea] Assuming there are a bunch of companies willing to accept blockchain.info IOUs instead of bitcoins, you do not need (and probably don't want) these kinds of everyday transactions on the blockchain, and when the merchant receives the coins, they probably aren't going to immediately cash these IOUs out for coins. Instead, they'll probably wait until, say, 50 purchases have been made, or a week elapses, or whatever other kind of procedure they've made to find a balance between the drawbacks of working on the blockchain and the risks of working off -- so you have "chunky" outputs as well as "chunky" inputs.

[Another separate idea] What if fees start ballooning due to usage growth while the block size isn't increased? The market's going to correct for this, right? Nobody wants to pay a $1-equivalent fee for the privilege of buying a cup of coffee at the gas station, but with off-chain transactions they don't have to, so I'd guess they'll start moving toward those redeemable IOUs, where they can pay far, far lower fees while still being able to redeem their coins if they so choose. They'll probably be inclined to wait for a number of off-blockchain inputs before redeeming them for a "chunky" output of controlled coins to their privkey, and in that case, assuming the previous ideas are legit, you'd end up with relatively chunky inputs and outputs in sendmany transactions from a few major players in the market (payment processors, online wallets).

Obviously, off-chain transactions increase the "tps" of "Bitcoin" because they're just shuffling IOUs around on a server like Paypal or VISA. But they can also "actually" increase Bitcoin's tps - and by tps I'm referring to the number of unique outputs, not the number of "actual transactions", since Bitcoin isn't bound to only one payment per "transaction" - by encouraging the bundling of payments into sendmany transactions with larger (relative to now) inputs and outputs, maybe very significantly. Tx fees encourage that kind of bundling, which lets Bitcoin effectively fit more payments in a block while also decreasing the costs full nodes bear for an overly-bloated blockchain. The idea here was that the stated max tps is affected (and I'd suggest significantly) by behavior that the liberal max block size and continual increases do nothing to discourage, when a higher tps (and all the savings that go along with that increased efficiency) would probably be achieved with conservative adjustments to the max block size.
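
To put rough numbers on the bundling argument (same approximate byte sizes as in the first post; real transactions vary):

Code:
# Bytes used by N one-input/one-output transactions vs one sendmany with N outputs,
# using the thread's rough per-input/per-output/overhead estimates.
INPUT_BYTES, OUTPUT_BYTES, OVERHEAD_BYTES = 180, 40, 50

def separate_payments(n):
    return n * (INPUT_BYTES + OUTPUT_BYTES + OVERHEAD_BYTES)

def bundled_sendmany(n):          # one input paying n recipients
    return INPUT_BYTES + n * OUTPUT_BYTES + OVERHEAD_BYTES

for n in (10, 100, 1000):
    print(n, separate_payments(n), bundled_sendmany(n))
# 10 -> 2,700 vs 630 bytes; 100 -> 27,000 vs 4,230; 1,000 -> 270,000 vs 40,230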
justusranvier - Legendary
May 15, 2014, 02:15:16 AM  #10

Quote from: Foxpup
Correction: a block can currently contain 1MB of data. Originally, there was no limit, but at some point the 1MB limit was added to prevent DOS attacks. It was never intended to be permanent and can be increased or removed entirely if necessary (I think Gavin said the limit would be raised once pruning is implemented).
Quote from: Kluge
It was 500kb prior. Maybe even lower when max size was first implemented... I wasn't around way back then.
The first version of the client had no explicit limit for the size of a block, but there was a 32 MB message size limit so that was the effective max block size. Then as soon as there was an explicit block size limit defined in the code it was set to 1 MB.

500kb was a soft limit, not a protocol limit.
jdbtracker - Hero Member (Minimum Effort/Maximum effect)
May 15, 2014, 02:00:11 PM (last edit: May 15, 2014, 02:13:09 PM)  #11

It's definitely possible to increase the TPS almost without limit by having mini-networks funnel bundled transactions through Bitcoin; we have the technology. As far as I can tell, the system was designed with the 1MB limit to spur innovation in cryptography, schemes, and programming - tight programming practices like those that were common back in the 80s. Did you know that the first MechWarrior ran in 600kb of RAM? Now that is some efficient programming. I'm surprised that I can't run a full Linux distro on my phone even though it exceeds the minimum requirements for Linux back in 2005: 2.5GHz quad-core CPU / 2GB RAM / 16GB storage.

This artificial limit led to what we have now: the development of coloured coins, Mastercoin, escrow/scripting systems, and tighter, more efficient cryptographic ciphers. Looking at the code, Satoshi wrote it so any change to that limit would break the client off the network; I am very sure this was intentional. I love Bitcoin because of all the innovation that has come out of it. I keep learning new things every day on this forum :) - it's the reason I keep coming back.

You could say that Bitcoin is the backbone of a secure financial system: when you cash out at the end of the day, you run it through the blockchain as a lump transaction and voilà, you have just created your own private account that hides your daily spending habits. I'm surprised no one has created secure mini-networks (off-chain) for people to use, just one layer on top of their Bitcoin wallet to keep their financial information private.

Off-chain transactions are another idea born from the 1MB limit. Adversity creates opportunity.

In fact, I am surprised no one has thought of mixing Bitcoin with Bitmessage in one wallet, using the Bitmessage protocol to communicate and exchange unsecured anonymous transactions (digital cash/Chaumian blinding) - but knowing the history of ideas, I'm sure someone on the forum has thought of this before. This could be leveraged to create a public/private wallet system where you can remain anonymous by using Bitmessage third-party accounts as escrows.

dewdeded - Legendary (Monero Evangelist)
May 15, 2014, 09:11:02 PM  #12

Kluge: Make it easy for yourself. Just think about it this way: if a company like VISA can build a regular, operating 2,000 tps system (with peaks of up to 8,500 tps), the bitcoin community can too.
The best talents in distributed systems and distributed computing, both academics and general professionals, are active and involved with Bitcoin and have a vested interest in building the bitcoin network into handling thousands of tps
(because they e.g. support the bitcoin ideas/project, hold coins, or want the recognition for proposing the solution that made bitcoin better/scale, ...).

If it can be done (and it can, as VISA and MC have already shown), it will be done (by the bitcoin dev scene).


Last but not least, what should give you hope: VISA's and MC's setup for handling such a load is also a distributed P2P network, not a "top down" approach with big central servers.
Altoidnerd - Sr. Member
May 16, 2014, 05:37:23 AM  #13

Quote from: Foxpup
Correction: a block can currently contain 1MB of data. Originally, there was no limit, but at some point the 1MB limit was added to prevent DOS attacks. It was never intended to be permanent and can be increased or removed entirely if necessary (I think Gavin said the limit would be raised once pruning is implemented).

Shouldn't the tx fee prevent DDOS attacks of this type?

DeathAndTaxes - Donator, Legendary (Gerald Davis)
May 16, 2014, 05:39:08 AM  #14

Quote from: Foxpup
Correction: a block can currently contain 1MB of data. Originally, there was no limit, but at some point the 1MB limit was added to prevent DOS attacks. It was never intended to be permanent and can be increased or removed entirely if necessary (I think Gavin said the limit would be raised once pruning is implemented).

Quote from: Altoidnerd
Shouldn't the tx fee prevent DDOS attacks of this type?

A tx fee prevents another type of attack (namely spamming the memory pool), or at least raises its cost. However, there is no required tx fee, so if there were no maximum block size, someone with sufficient mining power could, for example, make a 1TB block (all free txs made by the miner himself) and now the blockchain is >1TB.
justusranvier - Legendary
May 16, 2014, 04:31:35 PM  #15

Quote from: DeathAndTaxes
A tx fee prevents another type of attack (namely spamming the memory pool), or at least raises its cost. However, there is no required tx fee, so if there were no maximum block size, someone with sufficient mining power could, for example, make a 1TB block (all free txs made by the miner himself) and now the blockchain is >1TB.
It's extremely likely that a 1 TB block would be orphaned well before the miner transmitted it to his first peer.
DeathAndTaxes - Donator, Legendary (Gerald Davis)
May 16, 2014, 05:07:07 PM  #16

Quote from: DeathAndTaxes
A tx fee prevents another type of attack (namely spamming the memory pool), or at least raises its cost. However, there is no required tx fee, so if there were no maximum block size, someone with sufficient mining power could, for example, make a 1TB block (all free txs made by the miner himself) and now the blockchain is >1TB.
Quote from: justusranvier
It's extremely likely that a 1 TB block would be orphaned well before the miner transmitted it to his first peer.

It is just an example. 1TB was hyperbolic, but multiple smaller (though still large) blocks over time could accomplish the same goal. A well-connected attacker could analyze orphan rates and latency and choose the optimal size that produces the maximum blockchain growth over time. An attacker with less than 51% of the hashpower but looking to degrade Bitcoin wouldn't necessarily care about the cost. An attacker with 1% of the hashrate would still have >500 attempts per year. A larger attacker could optimize the attack by not broadcasting until he was at least 2 blocks ahead, in a modified form of the selfish miner attack, but with the goal of bloating the chain instead of gaining increased rewards.
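
The ">500 attempts per year" figure follows directly from the nominal block interval; a quick check (assuming the ideal 10-minute spacing, which real intervals only approximate):

Code:
# Expected blocks found per year for a given hashrate share,
# assuming the nominal 10-minute block interval (actual intervals vary).
BLOCKS_PER_YEAR = 6 * 24 * 365            # 52,560

for share in (0.01, 0.05, 0.25):
    print(f"{share:.0%} of hashrate -> ~{BLOCKS_PER_YEAR * share:.0f} blocks/year")
# 1% of the hashrate -> ~526 blocks/year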

The 1MB limit will almost certainly be raised but using another sanity cap is a good idea.  Optimally it would be some floating cap which is deterministic and based on blockchain usage but that may take some time to develop and test.  In the interim raising to say 10MB gives the network breathing room while limiting the scope of an attack.
Peter R - Legendary
May 16, 2014, 07:53:39 PM  #17


Quote from: DeathAndTaxes
The 1MB limit will almost certainly be raised but using another sanity cap is a good idea.  Optimally it would be some floating cap which is deterministic and based on blockchain usage but that may take some time to develop and test.  In the interim raising to say 10MB gives the network breathing room while limiting the scope of an attack.



Would it be reasonable to recalculate the floating cap each time the difficulty is retargeted (every 2016 blocks)? You could set the max block size to, say, 4 x avg_block_size over the last 2016-block period.


Also, perhaps it would be possible to further reduce orphan costs (to the extent that miners are cooperative) by establishing informal "best practices" for filling each block.  The risk of an orphan to a particular miner is reduced when the blocksize variance is minimized.  When I watch the blocks roll in, it seems that miners are already working to minimize blocksize variance to a certain extent, but perhaps they could take it one step further.  It could be informally agreed upon, for example, to use a proportional feedback controller to determine the block_size for the current block you're working on:

   block_size = avg_block_size  +  K x (unconfirmed_transactions_kB - target_unconfirmed_transactions_kB)

where K is a gain parameter that would be loosely agreed upon.  Miners that aren't following these guidelines wouldn't be punished, but it would be clear to their hashpower providers what was going on.
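
A minimal sketch of that controller, with placeholder values for the gain K and the target backlog (nothing here was agreed upon in the thread):

Code:
# Proportional feedback controller for the next block's size, per the formula above.
# K and target_unconfirmed_kb are illustrative placeholders, not agreed-upon values.
MAX_BLOCK_KB = 1000            # the consensus cap still applies

def next_block_size_kb(avg_block_size_kb, unconfirmed_kb,
                       target_unconfirmed_kb=5000, K=0.1):
    size = avg_block_size_kb + K * (unconfirmed_kb - target_unconfirmed_kb)
    return max(0.0, min(size, MAX_BLOCK_KB))

# Backlog above target -> build a somewhat larger-than-average block
print(next_block_size_kb(avg_block_size_kb=300, unconfirmed_kb=8000))   # 600.0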
 

DeathAndTaxes - Donator, Legendary (Gerald Davis)
May 16, 2014, 08:05:40 PM (last edit: May 16, 2014, 08:17:03 PM)  #18


Quote from: DeathAndTaxes
The 1MB limit will almost certainly be raised but using another sanity cap is a good idea.  Optimally it would be some floating cap which is deterministic and based on blockchain usage but that may take some time to develop and test.  In the interim raising to say 10MB gives the network breathing room while limiting the scope of an attack.



Quote from: Peter R
Would it be reasonable to recalculate the floating cap each time the difficulty is retargeted (every 2016 blocks)? You could set the max block size to, say, 4 x avg_block_size over the last 2016-block period.

That is my general understanding, however the specific multiplier and period length should get some serious analysis and debate.  There are other things to consider, like whether the algorithm should be a one-way ratchet (cap can only rise, not fall).  What are the implications either way?  Should there be sanity limits (possibly in line with Moore's law) on the rate of growth, to strike a compromise between tx volume and the resources required to operate a full node?  Having one million txs per block reduces the need for centralized off-chain services, but if that results in only a handful of datacenters having the resources to run full nodes, well, that is its own kind of centralization.
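
As a concrete illustration of the kind of deterministic floating cap being discussed (the 4x multiplier follows Peter R's example and the ratchet option is the open question above; none of it is a settled proposal):

Code:
# Recompute the cap once per retarget period (2016 blocks).
# multiplier=4 follows the example above; ratchet=True makes the cap rise-only.
def next_max_block_size(block_sizes_last_period, current_cap,
                        multiplier=4, ratchet=True):
    avg = sum(block_sizes_last_period) / len(block_sizes_last_period)
    new_cap = int(multiplier * avg)
    if ratchet:
        new_cap = max(new_cap, current_cap)   # cap can only rise, never fall
    return new_cap

# Example: average block of 300,000 bytes over the period -> cap moves to 1,200,000 bytes
print(next_max_block_size([300_000] * 2016, current_cap=1_000_000))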

This is why personally (unless someone convinces me otherwise) I think the best route is a one-time fixed increase to the cap (say a 5 to 10 MB block size), combined with a plan to have a deterministic algorithm in place before rising volume necessitates another manual increase.

As for a target block size, I am not sure there is any value in that.  Pools are already starting to diverge in terms of their criteria for transaction selection.  Eligius, for example, tends to make the largest blocks, but they also don't include non-paying transactions.  I see nothing wrong with that, and they would likely continue to build as they see fit regardless of what the target is.  If free tx volume rises and paying tx volume falls, I don't think anyone should feel obligated to change their block sizes.  My understanding is that future versions of bitcoind will remove default values for block generation; miners will need to explicitly pick the values or bitcoind will produce an error.  Orphan costs can be reduced ~90% by changing the new block message format to include tx hashes instead of full transactions.
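
The rough size comparison behind that ~90% figure, assuming an average transaction of about 500 bytes (an assumption for illustration, not a number from this thread):

Code:
# Relaying 32-byte tx hashes instead of full transactions in the block message.
AVG_TX_BYTES = 500          # assumed average transaction size, for illustration
TX_HASH_BYTES = 32
TXS_IN_BLOCK = 2000         # roughly fills a 1MB block at that average size

full_message = TXS_IN_BLOCK * AVG_TX_BYTES        # 1,000,000 bytes
hash_only_message = TXS_IN_BLOCK * TX_HASH_BYTES  # 64,000 bytes
print(f"{1 - hash_only_message / full_message:.0%} less block data to relay")   # ~94%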
Peter R - Legendary
May 16, 2014, 08:34:59 PM  #19

Quote from: DeathAndTaxes
This is why personally (unless someone convinces me otherwise) I think the best route is a one-time fixed increase to the cap (say a 5 to 10 MB block size), combined with a plan to have a deterministic algorithm in place before rising volume necessitates another manual increase.

I completely agree here.  Increase the cap once to 5 - 10 MB, and then later implement floating caps using a deterministic algorithm.


Quote
Orphan costs can be reduced ~90% by changing the new block message format to include tx hashes instead of full transactions.

I've read a bit about this idea.  A 90% reduction in block propagation time would be very helpful for reducing orphans.  In your opinion, is there much risk in making this change?  

Also, I understand that someone derived an equation that describes the impact of block propagation times on expected mining revenue.  I've done a couple of searches, but I've never found the original thread.  You wouldn't happen to know where this is, would you?

asdf - Hero Member
June 08, 2014, 05:39:20 AM  #20

Quote from: Peter R
Quote from: DeathAndTaxes
This is why personally (unless someone convinces me otherwise) I think the best route is a one-time fixed increase to the cap (say a 5 to 10 MB block size), combined with a plan to have a deterministic algorithm in place before rising volume necessitates another manual increase.

I completely agree here.  Increase the cap once to 5 - 10 MB, and then later implement floating caps using a deterministic algorithm.

Quote
Orphan costs can be reduced ~90% by changing the new block message format to include tx hashes instead of full transactions.

I've read a bit about this idea.  A 90% reduction in block propagation time would be very helpful for reducing orphans.  In your opinion, is there much risk in making this change?

Also, I understand that someone derived an equation that describes the impact of block propagation times on expected mining revenue.  I've done a couple of searches, but I've never found the original thread.  You wouldn't happen to know where this is, would you?

https://gist.github.com/gavinandresen/5044482