Author Topic: Once again, what about the scalability issue?  (Read 11206 times)
DeathAndTaxes (Donator, Legendary; Gerald Davis)
July 18, 2013, 11:29:53 PM  #41

Quote
People, stop saying that scalability is not a problem and writing about how cheap hard drives are.
Scalability is the number one problem stopping Bitcoin from becoming mainstream.
It doesn't matter how fast drives are growing; right now the blockchain keeps all the old information, which is not even needed, and grows indefinitely. How hard is it to understand that this is a non-scalable, non-future-friendly scheme?
I am sure the devs know this and are doing their best to address it, and I am grateful for that. But saying that it's not a problem is just ignorant and stupid.

Quote
We won't get some real big transaction volume because of this issue.
I can't see how anybody is even arguing against this. I mean, it's even in the wiki: https://en.bitcoin.it/wiki/Scalability

The historical storage is a non-issue and the scalability page points that out.  Bandwidth (for CURRENT blocks) presents a much harder bottleneck at extreme transaction levels, and after bandwidth comes memory, as fast validation requires the UTXO set to be cached in memory.  Thankfully dust rules will constrain the growth of the UTXO set; still, both bandwidth and memory will become an issue much sooner than storing the blockchain on disk.

The idea that today's transaction volume is held back by the "massive" blockchain isn't supported by the facts.  Even the 1MB block limit provides for ~7 tps, and the current network isn't even at 0.5 tps sustained.  We could see a 1,300% increase in transaction volume before the 1MB limit became an issue.  At 1 MB per block the blockchain would grow by ~50 GB per year.  It would take 20 years of maxed-out 1MB blocks before the blockchain couldn't fit on an "ancient" (in the year 2033) 1TB drive.
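
For anyone who wants to check that arithmetic, here it is as a quick Python sketch (the 250-byte average tx size is the rough figure used in this thread, an assumption rather than a measured constant):
Code:
# Back-of-the-envelope check of the figures above (a sketch, not consensus code).
BLOCK_LIMIT_BYTES = 1_000_000   # 1 MB block size limit
BLOCK_INTERVAL_S  = 600         # 10-minute block target
AVG_TX_BYTES      = 250         # assumed average transaction size

max_tps = BLOCK_LIMIT_BYTES / AVG_TX_BYTES / BLOCK_INTERVAL_S
print(f"max throughput: {max_tps:.1f} tps")                  # ~6.7, the familiar ~7 tps

blocks_per_year = 365 * 24 * 3600 / BLOCK_INTERVAL_S
growth_gb = blocks_per_year * BLOCK_LIMIT_BYTES / 1e9
print(f"worst-case chain growth: {growth_gb:.0f} GB/year")   # ~53 GB/year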

Beyond 1MB the storage requirements will grow, but they will run up against memory and bandwidth long before disk space becomes too expensive.  Still, as pointed out, eventually most nodes will not maintain a copy of the full blockchain; that will be a task reserved for "archive nodes".  Other nodes will just retain the block headers (~4MB per year) and a deep enough section of the recent blockchain.

Quote from: Anon136
so as far as addressing the bandwidth bottleneck problem you are in the off chain transaction camp correct?

No, although I believe off-chain tx will happen regardless.  They happen right now.  Some people leave their BTC on MtGox, and when they pay someone who also has a MtGox address it happens instantly, without fees, and off the blockchain.  Now imagine MtGox partners with an eWallet provider and both companies hold funds in reserve to cover transfers to each other's private books.  Suddenly you can transfer funds between the two services just as instantly, without touching the blockchain.

So off chain tx are going to happen regardless.

I was just pointing out that between the four critical resources:
bandwidth
memory
processing power
storage

storage is so far behind the other ones that worrying about it is kinda silly.  We will hit walls in memory and bandwidth at much lower tps than it would take before disk space became critical.  The good news is last-mile bandwidth is still increasing (doubling every 18-24 months); however, there is a risk of centralization if tx volume grows beyond what the "average" node can handle.  If tx volume grows so fast that 99% of nodes simply can't maintain a full node because they lack sufficient bandwidth to keep up with the blockchain, then you will see a lot of full nodes go offline, and there is a risk that the network ends up in the hands of a much smaller number of nodes (likely in datacenters with extremely high bandwidth links).  Bandwidth is both the tightest bottleneck AND the one many users have the least control over.  As an example, I recently paid $80 and doubled my workstation's RAM to 16GB.  Let's say my workstation is viable for another 3 years.  $80/36 ≈ $2-3 per month.  Even if bitcoind today were memory constrained on 8GB systems, I could bypass that bottleneck for a few dollars a month.  I like Bitcoin, I want to see it work, I will gladly pay $3 to make sure it happens.  However, I can't pay an extra $3 a month and double my upstream bandwidth (and for residential connections upstream is the killer).  So hypothetically, if Bitcoin weren't memory or storage constrained but bandwidth constrained today, I would be "stuck": I am looking either at much higher cost or at more exotic solutions (like running my node on a server).

Yeah that was longer than I intended. 

TL/DR: Yes, scalability will ALWAYS be an issue as long as tx volume is growing; however, storage is the least of our worries.  The point is also somewhat moot because eventually most nodes won't maintain full blocks back to the genesis block.  That will be reserved for "archive" nodes.  There likely will be fewer of them, but as long as there are a sufficient number to maintain a decentralized consensus the network can be just as secure, and users have a choice (full node, full headers & recent blocks, lite client) depending on their needs and risk.


Anon136 (Legendary)
July 19, 2013, 12:04:44 AM  #42

Quote from: DeathAndTaxes on July 18, 2013, 11:29:53 PM
...

ya i already knew all that Grin. i was just wondering how you thought the bandwidth bottleneck problem would be dealt with.

DeathAndTaxes (Donator, Legendary; Gerald Davis)
July 19, 2013, 12:47:04 AM  #43

Quote from: Anon136 on July 19, 2013, 12:04:44 AM
ya i already knew all that Grin. i was just wondering how you thought the bandwidth bottleneck problem would be dealt with.

My guess is a lot depends on how much Bitcoin grows and how quickly.  Also, bandwidth is less of an issue unless the developers decide to go to an unlimited block size in the near future.  Even a 5MB block cap would be fairly manageable.

With the protocol as it is now, let's assume a well-connected miner needs to transfer a block to peers in 3 seconds to remain competitive.  Say the average miner (with a node on a hosted server) has 100 Mbps upload bandwidth and needs to send the block to 20 peers: (100 * 3) / (8 * 20) = 1.875 MB, so we are probably fine "as is" up to a 2MB block.  With the average tx being 250 bytes, that carries us through to 10 to 15 tps (2*1024^2 / 250 tx per block).
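
Spelled out as a sketch, with the assumptions made explicit (the 100 Mbps, 3-second, and 20-peer figures are illustrative, not measured):
Code:
# Relay-budget sketch: how big a block can a miner push to all peers in time?
UPLOAD_MBPS = 100   # assumed miner upload bandwidth (Mbit/s)
WINDOW_S    = 3     # assumed relay window before orphan risk gets painful
PEERS       = 20    # assumed number of peers the block is sent to directly
AVG_TX      = 250   # assumed average tx size in bytes

max_block_mb = (UPLOAD_MBPS * WINDOW_S) / (8 * PEERS)        # Mbit -> MB per peer
print(f"max block size: {max_block_mb:.3f} MB")              # 1.875 MB

tps_at_2mb = (2 * 1024**2 / AVG_TX) / 600                    # tx per block / block time
print(f"throughput at a 2 MB block: ~{tps_at_2mb:.0f} tps")  # ~14 tps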

PayPal is roughly 100 tps, and handling that in the current inefficient manner would require an excessive amount of bandwidth.  Currently miners broadcast the transactions as part of the block, but that isn't necessary, as peers likely already have the transactions (miners can increase the hit rate by broadcasting the tx in a block to peers while the block is being worked on).  If a peer already knows of the tx, then for a block they just need the header (trivial bandwidth) and the list of transaction hashes.  A soft fork to the protocol could be made which allows broadcasting just the header and tx hash list.  If we assume the average tx is 250 bytes and the hash is 32 bytes, this means a >80% reduction in bandwidth required during the block transmission window (assumed 3 seconds to remain competitive without excessive orphans).

Note this doesn't eliminate the bandwidth necessary to relay tx, but it makes more efficient use of bandwidth.  Rather than a giant spike in required bandwidth for 3-5 seconds every 600 seconds and underutilized bandwidth the other 595 seconds, it would even out the spikes, getting more accomplished without higher latency.  At 100 tps a block would on average have 60,000 tx.  At 32 bytes each, broadcast over 3 seconds to 20 peers, that would require ~100Mbps.  An almost 8x improvement in miner throughput without increasing latency or peak bandwidth.
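
Here is that savings estimate as a sketch, using the same assumed figures (250-byte tx, 32-byte hashes, 100 tps, 20 peers, 3-second window):
Code:
# Header + tx-hash-list relay vs. rebroadcasting full transactions in the block.
TX_BYTES, HASH_BYTES = 250, 32
TPS, BLOCK_S, PEERS, WINDOW_S = 100, 600, 20, 3

txs_per_block = TPS * BLOCK_S                        # 60,000 tx
full_relay = txs_per_block * TX_BYTES                # ~15 MB per peer
hash_relay = txs_per_block * HASH_BYTES              # ~1.9 MB per peer
print(f"bandwidth saved: {1 - hash_relay / full_relay:.0%}")     # ~87%, i.e. >80%

peak_mbps = hash_relay * 8 * PEERS / WINDOW_S / 1e6
print(f"peak upload for the hash list: ~{peak_mbps:.0f} Mbps")   # ~100 Mbps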

For existing non-mining nodes it would be trivial to keep up.  Let's assume the average node relays a tx to 4 of its 8 peers.  Nodes could use improved relay logic to check if a peer needs a block before relaying.  To keep up, a node just needs to handle the tps plus the overhead of blocks without falling behind (i.e. one 60,000-tx block every 600 seconds).  Even with only 1Mbps upload it should be possible to keep up [ (100)*(250+32)*(8)*(4) / 1024^2 < 1.0 ].
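
The bracketed inequality, spelled out (the 8 is bits per byte, the 4 is the assumed number of peers relayed to):
Code:
# Can a node with 1 Mbps upload keep up at 100 tps?
TPS, TX_BYTES, HASH_BYTES, RELAY_PEERS = 100, 250, 32, 4

mbps_needed = TPS * (TX_BYTES + HASH_BYTES) * 8 * RELAY_PEERS / 1024**2
print(f"upload needed: {mbps_needed:.2f} Mbps")   # ~0.86 < 1.0 Mbps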

Now bootstrapping new nodes is a greater challenge.  The block headers are trivial (~4 MB per year), but it all depends on how big blocks are and how far back non-archive nodes will want/need to go.  The higher the tps relative to the average node's upload bandwidth, the longer it will take to bootstrap a node to a given depth.



tvbcof (Legendary)
July 19, 2013, 03:26:54 AM  #44


Quote
Satoshi believed from day 1 that not every user would maintain a full node.  That is why his paper includes a section on SPV.  Decentralized doesn't have to mean every single human on the planet is an equal peer in a network covering all transactions for the human race.  Tens of thousands or hundreds of thousands of nodes in a network used by millions or tens of millions provides sufficient decentralization that attacks to limit or exploit the network become infeasible.

Heh.  I'm still waiting for the bitcoin project to get honest and state that not all 'peers' are 'equal peers' in the 'p2p' network.  Somehow it seems not to be a priority.  Funny that.

It also would not hurt (from a perspective of truth in advertising) to stipulate that Bitcoin is 'eventually-deflationary', non-scalable, far from anonymous, and that the fluff about blockchain pruning was either marketing BS or has been de-prioritized (one suspects in order to assist in the formation of 'server-variety-peers' and the shifting of non-commercial entities into the 'client-variety-peer' category).


DeathAndTaxes (Donator, Legendary; Gerald Davis)
July 19, 2013, 03:44:35 AM  #45

Quote from: tvbcof on July 19, 2013, 03:26:54 AM
Heh.  I'm still waiting for the bitcoin project to get honest and state that not all 'peers' are 'equal peers' in the 'p2p' network.  Somehow it seems not to be a priority.  Funny that.

I think it is simpler than that.  If you aren't a full node you aren't a peer.  Period.  All peers are equal, but not all users are peers.  One way to look at it: interbank networks are a form of peer-to-peer networking (where access to the network is limited to a select few trusted peers).  If you send an ACH or bank wire you are using a peer-to-peer network, but YOU aren't one of the peers.  The sending and receiving banks (and any interim banks) are the peers.

I think a similar thing will happen with Bitcoin, with one exception.  It doesn't matter what computing power you have available or are willing to acquire: the banking p2p network is a good ole boys club, peons not invited.  With Bitcoin you at least have the CHOICE of being a peer.  In the long run (and this would apply to other crypto-currencies as well) a large number, possibly a supermajority, of users will not be peers.  They are willing to accept the tradeoff of reduced security for convenience and become a user, not a peer, of the network.

TL/DR:
There is no such thing as less-than-equal peers; you are either a peer or you aren't.  In Bitcoin v0.1, 100% of nodes were peers; today some large x% are; in time that x% will shrink.  Peers are still peers, but not everyone will want or need to be a peer.  There is a real cost to being a peer, and that cost (regardless of scalability improvements) is likely to rise over time.

Quote
and the fluff about blockchain pruning was either marketing BS or is de-prioritized (one suspects in order to assist in the formation of 'server-variety-peers' and shifting of non-commercial entities into the 'client-variety-peer' category).

I don't see any support for that claim.  On the contrary ...
https://bitcointalk.org/index.php?topic=252937.0

It is a non-trivial issue.  For complete security we want a large number of independent nodes maintaining a full historical copy of the blockchain.  It doesn't need to be every node, but enough that there remains a decentralized, hard-to-corrupt consensus on the canonical history of transactions.  There is a real risk in a jump to a pruned-db model that information is lost or overly centralized.  It doesn't mean the problem is unsolvable; however, it is better to err on the side of caution.
tvbcof (Legendary)
July 19, 2013, 05:31:05 AM  #46

Quote from: DeathAndTaxes on July 19, 2013, 03:44:35 AM
...

I was trying to be a bit facetious in using terms like 'unequal peers' and '[server|client]-variety-peers'.  Certain of the more technical here might appreciate it, and I suspect that you are among them.

Anyway, I'm glad I erred on the side of brevity (this time) and allowed you to make the point that the solution we are migrating towards looks an awful lot like what we see in ACH.  How long it takes to get there (if ever) will, I suspect, be dictated mainly by the transaction-per-unit-time growth.

I also suspect that you may not be thrilled about this evolution, but very well may see it as a necessary evil.  If so, I respectfully disagree.  In my mind it makes the solution not much good for much of anything, and that is particularly the case in light of the Snowden revelations (or 'confirmations' to some of us).

---

Again though, I find it scammy and offensive to prominently label the system 'peer-2-peer' as long as there is a likelihood that it's going SPV, and changing the default recommendation to Multibit is ample evidence that that is exactly the path chosen by those calling the shots.  The main things Bitcoin has going for it are that it is 'first' and it is 'open-source'.  It is honest and appropriate to dwell on those things because they happen to be true.


Come-from-Beyond (OP; Legendary)
July 19, 2013, 07:03:41 AM  #47

Quote
If you are a casual user unable to keep the client online, why not just use an SPV client?

I thought there wasn't any. Which one would u recommend?
drawingthesun (Legendary)
July 19, 2013, 07:15:16 AM  #48

Quote
bitcoin is made for criminals, it wasn't intended to grow big for mainstream transacting

satoshi said this in the early days

Can you show me where Satoshi said this?
DeathAndTaxes (Donator, Legendary; Gerald Davis)
July 19, 2013, 07:18:32 AM  #49

Quote from: Come-from-Beyond on July 19, 2013, 07:03:41 AM
I thought there wasn't any. Which one would u recommend?

https://multibit.org/

It is linked to and recommended from bitcoin.org
tvbcof (Legendary)
July 19, 2013, 07:22:00 AM  #50

Quote from: Come-from-Beyond on July 19, 2013, 07:03:41 AM
I thought there wasn't any. Which one would u recommend?

Multibit.  It's being promoted as the default now by bitcoin.org.  Or so it seems to me via placement on the web page (and statements on the 'sticky' thread, which my very much on-topic post was deleted from since it was not good marketing material, apparently).

  http://bitcoin.org/en/choose-your-wallet

To Multibit's credit, the strings 'peer' or 'p2p' do not appear obviously anywhere on their site, though they still feature front-and-center on bitcoin.org.  Again, it seems pretty scammy to me.


Come-from-Beyond (OP; Legendary)
July 19, 2013, 07:37:48 AM  #51

Quote from: DeathAndTaxes on July 19, 2013, 07:18:32 AM
https://multibit.org/

It is linked to and recommended from bitcoin.org

Thx, I'll learn about it to make sure I don't have to trust it to be able to use it.
Anon136 (Legendary)
July 19, 2013, 03:45:28 PM  #52

Quote from: DeathAndTaxes on July 19, 2013, 12:47:04 AM
...

so even with an unlimited block size there would still be a market for transaction inclusion in blocks, since miners who attempted to relay a block that was too large would find it orphaned. that's important for network security.

also correct me if I'm wrong, but individual miners wouldn't even need a 100 Mbps connection, would they? just the pools.

100 tps is way plenty. even if we assume the load of all credit card companies combined, 100 tps would be enough to allow anyone who wanted to do an on-chain transaction to be able to afford it (excluding micro-transactions, but who cares about that). which is all that matters: we don't need a system where every transaction avoids all counterparty risk, what we need is a system where avoiding counterparty risk is affordable. 100 tps would provide that.

this post put my mind at ease. i mean, I'm already pretty significantly invested in bitcoin, because even if there were no solution to the scalability problem bitcoin would still have great utility; it's nice to know, however, that there are solutions.

DeathAndTaxes (Donator, Legendary; Gerald Davis)
July 19, 2013, 04:29:28 PM (last edited July 19, 2013, 11:03:44 PM)  #53

Quote from: Anon136 on July 19, 2013, 03:45:28 PM
also correct me if I'm wrong, but individual miners wouldn't even need a 100 Mbps connection, would they? just the pools.
Correct.  In this context only the entities actually building and broadcasting blocks are "miners".  So yes: the pool server, solo miners (ASICMiner), and setups like p2pool.  Basically, if you are broadcasting the block yourself via bitcoind then you are a "miner".  IMHO "pool workers" aren't really miners, and calling them that is inaccurate.  They are just computing power providers (CPPs). Smiley  I actually coined that term in a request to FinCEN for an administrative ruling, to highlight the distinction between the entity creating new blocks/coins and entities merely providing the resources.  We wouldn't call the power company or ISPs "miners", although electrical power and connectivity are required inputs to creating new coins/blocks.

Quote from: Anon136 on July 19, 2013, 03:45:28 PM
so even with an unlimited block size there would still be a market for transaction inclusion in blocks, since miners who attempted to relay a block that was too large would find it orphaned. that's important for network security.

Yes, however the risk is centralization.  It isn't that the network "couldn't" handle unlimited blocks; it is that we might not like the consequences of unlimited blocks.  As block sizes get larger and larger it becomes more difficult to keep orphans to a manageable level.  Orphans directly affect profitability, and for pools it is a double hit.  High orphans mean less gross revenue per unit of hashing power, but they also mean the pool is less competitive, so miners move to another pool.  The pool's revenue per unit of hashing power is reduced and its overall hashing power is reduced too.  So pools have a very large incentive to manage orphans.  Now, it is important to remember that it doesn't matter how long it takes for ALL nodes to get your block, just how long it takes for a majority of miners to get your block.  The average connection between major miners is what matters.

If the average connection can handle the average block then it is a non-issue.  However, imagine it can't, and orphan rates go up across the board.  Pools are incentivized to reduce orphans, so imagine x pools/major solo miners (enough to make up, say, 60% of total hashing power) moved all their servers to the same datacenter (or, for redundancy, the same mirrored sets of datacenters around the world).  Those pools would have essentially unlimited free bandwidth at line speed (i.e. 1Gbps for cheap and 10Gbps for reasonable cost).  The communication between pools wouldn't be over the open (slow) internet but near-instantaneous on a private network with boatloads of excess bandwidth.  This is very simple to accomplish if the miners are in the same datacenter: the miners just share one LAN connection on a private switch to communicate directly.  Now for these pools a 40MB, 400MB, or even 4000MB block is a non-issue.  They can relay it to each other, verify it, and start on the next block in a fraction of a second.  Near-0% orphan rates and reduced bandwidth costs.  For other miners, however, the burden of these large blocks means very high orphan rates.  How long do you think it will take before CPPs abandon a pool with 5% orphan rates for ones with near zero?  That isn't good for decentralization of the network.

I don't want to sound doomsday, but this is why BANDWIDTH (not stupid disk space) is the critical resource and the one which requires some careful thought when raising the block limit.  It is important that average block size not exceed what a miner on an average connection can broadcast to peers in a reasonable amount of time (3 seconds = ~0.5% orphan rate) on an average public internet connection.  Granted, those are pretty vague terms.  Obviously a 1MB block is below that critical level and a 1GB block is obviously above it.  Is 100MB fine today?  How about in 10 years?  Is 5MB fine today?  How about 2MB and doubling every two years?  I am not saying I have the answers, but that is the kind of thing we (the community at large, not just Gavin et al.) need to think about critically before saying "yeah, let's go to unlimited blocks and let the market figure it out".  I have no doubt the market will figure it out; however, one might not like what end state it reaches.

The good news is bandwidth is still increasing rapidly and the cost per unit of data is falling just as rapidly.  This is true both at the last mile (residential connections) and in datacenters.  So it is a problem which is manageable as long as average block size doesn't eclipse that growth.

Quote from: Anon136 on July 19, 2013, 03:45:28 PM
...

Don't take any of this as "set in stone"; it is more like when they ask you "how many windows are there in New York City?" in an interview.  Nobody cares what the exact number is; what the interviewer is looking for is the logic you use to come up with an answer.  If someone thinks my logic is flawed (and it certainly might be), well, that is fine and I would love to hear it.  If someone can convince me otherwise, that is even better.  However, show me some contrary logic.  If the counterargument is merely "unlimited or it is censorship", well, that doesn't really get us anywhere.

There are four different bandwidth bottlenecks.

Miners
Miners are somewhat unique in that they have to broadcast a block very quickly (say a 3-second-or-less target) to avoid excessive orphans and the loss of revenue that comes with them.  This means their bandwidth requirements are "peaky".  That can be smoothed out somewhat with protocol optimizations; however, I would expect to see miners run into a problem first.  The good news is it is not that difficult for pools or even solo miners to set up their bitcoind node in a datacenter where bandwidth is more available and at lower cost.

Non-mining full nodes
Full nodes don't need to receive blocks within seconds.  The positive is that as long as they receive them in a reasonable amount of time they can function.  The negative is that, unless we start moving to split wallets, these nodes are likely on residential connections which have limited upstream bandwidth.  A split wallet is where you have a private bitcoind running in a datacenter and your local wallet has no knowledge of the Bitcoin network; it just communicates securely with your private bitcoind.  An example of this today would be an Electrum client connecting to your own private, secure Electrum server.

Bootstrapping nodes
Another issue to consider is that if full nodes are close to peak utilization you can't bootstrap new nodes.  Imagine a user has a 10 Mbps connection but the transaction volume is 9 Mbps.  The blockchain is growing at 9 Mbps, so the user is only "gaining" on the end of the chain at 1 Mbps.  If the blockchain is, say, 30 GB it will take not ~7 hours (at the full 10 Mbps) but ~70 hours to catch up.
The good news here is there is some slack, because most residential connections have more downstream bandwidth than upstream bandwidth, and for synced nodes the upstream bandwidth is the critical resource.
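
The catch-up arithmetic from the example above, as a sketch (the 30 GB, 10 Mbps, and 9 Mbps figures are the assumed ones):
Code:
# Bootstrap time for a new node whose connection is mostly eaten by new blocks.
CHAIN_GB         = 30   # assumed blockchain size
DOWNLOAD_MBPS    = 10   # user's connection
STEADY_LOAD_MBPS = 9    # bandwidth consumed just keeping up with new blocks

chain_mbit = CHAIN_GB * 8 * 1000
print(f"idle network: ~{chain_mbit / DOWNLOAD_MBPS / 3600:.0f} h")    # ~7 h
catchup = chain_mbit / (DOWNLOAD_MBPS - STEADY_LOAD_MBPS) / 3600
print(f"while keeping up with new blocks: ~{catchup:.0f} h")          # ~67 h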

SPV nodes
The bandwidth requirements for SPV nodes are negligible and unlikely to be an issue, which is a good thing; however, SPV nodes don't provide for network security.  While SPV is important so casual users are not hit with the rising costs of running a full node, at the same time we want to ensure that running a full node remains a realistic option for enthusiasts.  Maybe not everyone can run a full node, but it shouldn't be out of the reach of the majority of potential users (i.e. require 200Mbps symmetric low-latency connectivity, 20TB of storage, an enterprise-grade RAID controller, 64GB of RAM, and quad Xeon processors).  How many full nodes are needed?  Well, more is always better; there is no scenario where more hurts us in any way, so it is more a question of how few we can "get away with" while staying above that number.  Is 100 enough?  Probably not.  1,000?  10,000?  100,000?  10% of users, 1% of users?  I don't have the answer; it is just something to think about.  Higher requirements for full nodes mean fewer full nodes but more on-chain trustless transaction volume.  It is a tradeoff, a compromise, and there is no perfect answer.  1GB blocks are non-optimal in that they favor volume over decentralization too much.  Staying at 1MB blocks forever is non-optimal in that it favors decentralization over volume too much.  The optimal point is probably somewhere in the middle, and the middle will move with technology.  We don't need to hit the exact optimal point; there likely is a large range which "works", and the goal should be to keep it down the middle of the lane.
Anon136 (Legendary)
July 19, 2013, 08:33:11 PM  #54

good information, thanks.

do you think Gavin has the leverage/influence/power to remove the block size limit? i don't think he does.

bytemaster (Hero Member; fractally)
July 19, 2013, 10:49:25 PM  #55

Bandwidth is more critical than disk space for decentralization. 

Parallel chains with a fixed limit on bandwidth per chain would be nice.

The ability to move value between chains would also be nice.

To move value between chains, a 3rd chain is required that is merge-mined with the other 2 chains.  This 3rd chain is responsible for confirming movements from one chain to the other and vice versa.  The cross-chain would have to be very low bandwidth, say 1% of the bandwidth allocated to the main chains.

With this approach you could have decentralization while still having only one crypto-currency.
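
To make the shape of the idea concrete, here is a toy Python model; every name in it is invented for illustration, and nothing here is a worked-out protocol:
Code:
# Toy model: two value chains plus a low-bandwidth merge-mined cross-chain
# that does nothing but confirm transfers between them. Illustrative only.
from dataclasses import dataclass

@dataclass
class CrossChainTransfer:
    src_chain: str      # chain where the coins are locked/destroyed
    dst_chain: str      # chain where the coins are re-issued
    amount: int         # value moved, in base units
    src_lock_txid: str  # proof the coins left the source chain

class CrossChain:
    """The 3rd chain: a ledger of inter-chain movements, nothing else."""
    def __init__(self) -> None:
        self.confirmed: list[CrossChainTransfer] = []

    def confirm(self, t: CrossChainTransfer) -> None:
        # In the real proposal this would require the transfer to be
        # merge-mined into a cross-chain block; here we just record it.
        self.confirmed.append(t)

bridge = CrossChain()
bridge.confirm(CrossChainTransfer("chain-A", "chain-B", 50_000_000, "ab12..."))
print(f"{len(bridge.confirmed)} cross-chain transfer(s) confirmed")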


Anon136 (Legendary)
July 19, 2013, 11:00:38 PM  #56

Quote from: bytemaster on July 19, 2013, 10:49:25 PM
...

this is the same solution i came up with when i first started thinking about this issue.

justusranvier (Legendary)
July 20, 2013, 12:56:08 AM  #57

Assume there exists a demand for cryptocurrency-denominated transactions.  This demand will require a certain amount of bandwidth to satisfy.

Suppose the demand is high enough that the entire cryptocurrency ecosystem requires 10 Gbit/sec average bandwidth.

How much does it matter if this 10 Gbit/sec global transaction demand is satisfied by 100 cryptocurrencies or 1 cryptocurrency?

Other factors to consider:

Would the average person prefer to manage a balance of 100 different cryptocurrencies, or would they prefer to hold their savings in a single currency that works everywhere?  If you're having trouble figuring this one out, consider whether the average Internet user prefers a single global networking standard that makes all resources accessible from any ISP, or would prefer to go back to the 1990s walled-garden days of AOL, GEnie, CompuServe, and other non-interoperable services.

What does the n² scaling property of the network effect imply for the value of a single network that can handle all 10 Gbit/sec of transactions itself vs 100 networks that can handle 100 Mbit/sec each?
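
As a quick illustration of what n² implies here (a Metcalfe-style sketch that takes value as proportional to users squared, which is itself only a rule of thumb, and an arbitrary assumed user count):
Code:
# Value of one network vs. the same users split across 100 networks.
USERS = 10_000_000                              # arbitrary assumed user count

one_network      = USERS ** 2
hundred_networks = 100 * (USERS / 100) ** 2     # same users, split 100 ways
print(f"single network: {one_network / hundred_networks:.0f}x more valuable")  # 100x
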
bytemaster (Hero Member; fractally)
July 20, 2013, 01:13:15 AM  #58

Quote from: justusranvier on July 20, 2013, 12:56:08 AM
...

The network effect has to be balanced against the centralization effect.  What you want is a single currency (for pricing purposes) that scales across many banks (chains) for decentralization purposes.

You would end up with a situation where transacting within a single chain is almost free (like transacting within a single bank) but transacting between different chains is more expensive (like a wire transfer).  If you want to send someone money you have to know both their bank and account number.

Of course, because private keys are good on all chains, you can send someone money on your chain and leave it up to them to move the funds to their normal chain.  Large centralized wallets would be able to integrate all of the chains into one 'account' to give the appearance of a single large chain while still allowing individual users with 1 Mbit internet connections to participate.

Assuming 10,000 trx/sec at 1024 bytes/trx, it would require an ~80 Mbit connection to handle all of the transaction traffic, which means that 256 chains could probably handle the entire transaction volume of VISA / MasterCard / PayPal and all wire transfers combined, and yet you would only have one chain per state on average.  Of course, a lot of this transaction volume will probably still flow through trusted 3rd parties that enable 'instant' transfers rather than 3+ confirmation transfers.
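
The bandwidth figure checks out under those stated assumptions (10,000 trx/sec, 1024 bytes each, 256 chains):
Code:
# Total traffic and per-chain traffic under the assumed load.
TPS, TX_BYTES, CHAINS = 10_000, 1024, 256

total_mbps = TPS * TX_BYTES * 8 / 1e6
print(f"total traffic: ~{total_mbps:.0f} Mbit/s")        # ~82 Mbit/s
print(f"per chain: ~{total_mbps / CHAINS:.2f} Mbit/s")   # ~0.32 Mbit/s, well under 1 Mbit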






Zangelbert Bingledack (Legendary)
July 20, 2013, 04:42:01 AM  #59

Quote from: DeathAndTaxes on July 19, 2013, 04:29:28 PM
It is important that average block size not exceed what a miner on an average connection can broadcast to peers in a reasonable amount of time (3 seconds = ~0.5% orphan rate) on an average public internet connection. ... I have no doubt the market will figure it out; however, one might not like what end state it reaches.

What makes you think the market won't take care of the average miner, such as by limiting block size normatively?  Any claim that the market won't take care of something should be justified by specifying how you think the market is broken for that function, because the norm is for markets to work, even if we can't immediately see how.
bytemaster (Hero Member; fractally)
July 20, 2013, 04:46:06 AM  #60

Quote from: Zangelbert Bingledack on July 20, 2013, 04:42:01 AM
What makes you think the market won't take care of the average miner, such as by limiting block size normatively?

Considering I am part of this market and am building these kinds of solutions, your claim about the market sorting it out is 100% right on.
