Bitcoin Forum

Bitcoin => Bitcoin Discussion => Topic started by: Come-from-Beyond on July 03, 2013, 02:29:30 PM



Title: Once again, what about the scalability issue?
Post by: Come-from-Beyond on July 03, 2013, 02:29:30 PM
It seems to me the Bitcoin core devs prefer the ostrich policy. The blockchain keeps growing, pruning is not implemented yet (is it even possible, btw?), and Gavin spoke about everything except the scalability issue at the Bitcoin 2013 conference...
Is there any progress? Or is the game over?


Title: Re: Once again, what about the scalability issue?
Post by: TippingPoint on July 03, 2013, 02:38:02 PM
A reasonable question.

The wiki says " In Satoshi's paper he describes "pruning", a way to delete unnecessary data about transactions that are fully spent... As of October 2012 (block 203258) there have been 7,979,231 transactions, however the size of the unspent output set is less than 100MiB, which is small enough to easily fit in RAM for even quite old computers."

I would like to read the Satoshi pruning description, and here it is:

7. Reclaiming Disk Space
Once the latest transaction in a coin is buried under enough blocks, the spent transactions before it can be discarded to save disk space. To facilitate this without breaking the block's hash, transactions are hashed in a Merkle Tree, with only the root included in the block's hash. Old blocks can then be compacted by stubbing off branches of the tree. The interior hashes do not need to be stored.

A block header with no transactions would be about 80 bytes. If we suppose blocks are generated every 10 minutes, 80 bytes * 6 * 24 * 365 = 4.2MB per year. With computer systems typically selling with 2GB of RAM as of 2008, and Moore's Law predicting current growth of 1.2GB per year, storage should not be a problem even if the block headers must be kept in memory.
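As a quick sanity check, the header-storage figure in the quoted section works out as follows (a hypothetical back-of-the-envelope script, not from any client):

```python
# Sanity check of the figures quoted above: 80-byte block headers,
# one block every 10 minutes (6 per hour).
HEADER_BYTES = 80
BLOCKS_PER_YEAR = 6 * 24 * 365

annual_bytes = HEADER_BYTES * BLOCKS_PER_YEAR
print(f"~{annual_bytes / 1e6:.1f} MB of headers per year")  # ~4.2 MB
```

This reproduces the ~4.2MB/year figure from the paper.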


Title: Re: Once again, what about the scalability issue?
Post by: asically on July 03, 2013, 02:43:25 PM
What about a Simplified Payment Verification implementation?


Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on July 04, 2013, 09:14:29 PM
Blockchain size - 8.09 GB


Title: Re: Once again, what about the scalability issue?
Post by: porcupine87 on July 04, 2013, 11:36:06 PM
Blockchain size - 8.09 GB

Question: how can it be that everyone has a different size for the blockchain? I sometimes read about 11 GB here, now you say 8.09, and I have 9.79. What factors does the size depend on?


Title: Re: Once again, what about the scalability issue?
Post by: readonlyaccess on July 05, 2013, 12:16:24 AM
Blockchain size - 8.09 GB

Question: how can it be that everyone has a different size for the blockchain? I sometimes read about 11 GB here, now you say 8.09, and I have 9.79. What factors does the size depend on?

http://blockchain.info/charts/blocks-size


Title: Re: Once again, what about the scalability issue?
Post by: wolverine.ks on July 05, 2013, 12:26:38 AM
Maybe I'm wrong, but I have yet to hear of someone having a practical problem with the blockchain size. I hear a lot of gloom and doom, but never examples of something that someone tried to do but was unable to do because of the size.

Additionally, it seems that there are already people making workarounds for what they believe to be limitations in the protocol, and they are making money off of it.

So it's always a good idea to keep your eye on the future, but this seems more like a fear about the free market's ability to cope with obstacles than a fear that Bitcoin will someday break.


Title: Re: Once again, what about the scalability issue?
Post by: d'aniel on July 05, 2013, 12:47:29 AM
It seems to me Bitcoin core devs prefer ostrich policy. The blockchain keeps growing, pruning is not implemented yet (is it possible btw?), Gavin spoke about everything except the scalability issue on Bitcoin 2013 conference...
Is there any progress? Or is the game over?
That's nice, you've completely ignored all the recent work Pieter Wuille has done with ultraprune, which sets the stage for pruning the currently 8GB blockchain that takes up a whopping 1.6% of my laptop's hard disk (at this rate it doesn't matter if it takes him another year or two to fully implement pruning). Not to mention his fast signature-checking implementation.

Gavin's recent payment protocol work is equally important, and maybe he isn't personally working on these things simply because Pieter already is.

Welcome to my ignore list, you lousy ingrate.


Title: Re: Once again, what about the scalability issue?
Post by: d'aniel on July 05, 2013, 01:10:57 AM
Bitcoin has never synced up on my computer, and now I know why: it's too big and buggy.

NYC ;)

This Pieter Wuille? lol:

http://www.youtube.com/watch?v=LSNn4HEDYWs
I've synced from scratch almost a dozen times over the past few years without any trouble.

You get to be on my ignore list too.


Title: Re: Once again, what about the scalability issue?
Post by: Cyberdyne on July 05, 2013, 01:55:40 AM
I just bought a 3 TB hard drive for cheap.

Next year I might buy a 4 TB for cheap.

Ostrich policy suits me fine.


Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on July 05, 2013, 07:35:23 AM
Question: how can it be that everyone has a different size for the blockchain? I sometimes read about 11 GB here, now you say 8.09, and I have 9.79. What factors does the size depend on?

Blockchain.Info says 8280 MB, which is 8.09 GB.



maybe im wrong, but i have yet to hear of someone having a practical problem with the blockchain size.

Try to download the blockchain on a new computer.



That's nice, you've completely ignored all the recent work Peter Wuille has done with ultraprune, which sets the stage for pruning the currently 8GB blockchain...

Is there any result?


Welcome to my ignore list you lousy ingrate.

No one cares about your ignore list.



I just bought a 3 TB hard drive for cheap.

Next year I might buy a 4 TB for cheap.

Ostrich policy suits me fine.

OK, but then forget about worldwide adoption and 1 BTC for $1000.


Title: Re: Once again, what about the scalability issue?
Post by: wopwop on July 05, 2013, 08:14:04 AM
Bitcoin is made for criminals; it wasn't intended to grow big for mainstream transacting.

Satoshi said this in the early days.


Title: Re: Once again, what about the scalability issue?
Post by: 🏰 TradeFortress 🏰 on July 05, 2013, 09:45:55 AM
Off-chain transactions allow Bitcoin to scale. Sure, they have their own drawbacks, like requiring trust, but they're still a solution.


Title: Re: Once again, what about the scalability issue?
Post by: porcupine87 on July 05, 2013, 10:12:41 AM
Question: how can it be that everyone has a different size for the blockchain? I sometimes read about 11 GB here, now you say 8.09, and I have 9.79. What factors does the size depend on?

Blockchain.Info says 8280 MB, which is 8.09 GB.
Hm, cool, but on my hard drive the chain requires 9.81 GB; that's the size of my "blocks" folder. So how can that be?


Title: Re: Once again, what about the scalability issue?
Post by: xavier on July 05, 2013, 10:17:15 AM
Yes, this is the number 1 issue with Bitcoin.

No, it hasn't been solved.

All posts about SPV can be ignored; the idea of SPV is fundamentally flawed.

Right now Bitcoin remains unscalable; this issue still hasn't been solved.

As a side note, just because a developer is well known and established in the community, it doesn't mean everything he says is correct. The only proven genius behind Bitcoin is Satoshi, who created it, and he left the project long ago.

Yes, what I'm saying is controversial. I've been saying the same thing for months now. Anyway, deal with it.


Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on July 05, 2013, 11:08:32 AM
Blockchain.Info says 8280 MB, which is 8.09 GB.
Hm, cool, but on my hard drive the chain requires 9.81 GB; that's the size of my "blocks" folder. So how can that be?

The internal structures of your wallet software add some overhead.


Title: Re: Once again, what about the scalability issue?
Post by: Mike Hearn on July 05, 2013, 12:02:34 PM
It's expected that local disk usage measurements will vary, depending on whether you include the LevelDB databases in your total and on how many orphaned blocks you have stored.



Title: Re: Once again, what about the scalability issue?
Post by: Suushi on July 05, 2013, 02:33:07 PM
http://screencast.com/t/SzZrgmWed1ZO

Here's my folder size.. weird


Title: Re: Once again, what about the scalability issue?
Post by: warpio on July 05, 2013, 02:54:26 PM
It will be a great milestone once we are able to run full nodes without having to worry about the growing size of the full blockchain.

There's still plenty of time for this to be implemented. I'm not worried. Until then, people who don't want to download the full blockchain can rely on the 3rd party nodes/exchange services that we have now.


Title: Re: Once again, what about the scalability issue?
Post by: justusranvier on July 05, 2013, 04:35:54 PM
Once again, people are working on scalability. Donate if you really care about the problem and want to help:

http://utxo.tumblr.com/


Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on July 08, 2013, 06:17:15 AM
Blockchain size - 8.14 GB


Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on July 08, 2013, 09:28:57 AM
Blockchain size - 8.14 GB

Wait the blockchain is still growing :O I thought the removing of dust was going to solve all scalability problems :( I guess Gavin was wrong.

/sarcasm

Removing dust was done to buy a few years before the majority sees that Bitcoin has a lot of scalability issues.

/imho


Title: Re: Once again, what about the scalability issue?
Post by: nwbitcoin on July 08, 2013, 11:17:16 AM
This is a non-issue until we get some really big transaction volume, and that isn't going to happen for a number of years.

The original white paper mentioned the number of Visa transactions in a day as a guide, and the infrastructure of Bitcoin allows for it to be managed with no real problems. However, what isn't being added to the mix is Moore's Law, which is also going to help the whole chain be managed without serious pruning.

In general, there is nothing to see here! ;)



Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on July 08, 2013, 11:53:41 AM
This is a non-issue until we get some really big transaction volume, and that isn't going to happen for a number of years.

We won't get really big transaction volume because of this issue.


Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on July 17, 2013, 06:48:36 PM
Blockchain size - 8.3 GB


Title: Re: Once again, what about the scalability issue?
Post by: DeathAndTaxes on July 18, 2013, 12:05:48 AM
Blockchain size - 8.3 GB

UTXO set ~0.24 GB and growing linearly by about 0.1 GB per year.

Seeing as my workstation has 16GB of RAM and 3TB of storage, I will probably need to upgrade my system by the year 2130. I put it on my Google calendar.


Title: Re: Once again, what about the scalability issue?
Post by: mcdett on July 18, 2013, 12:41:20 AM
We won't get some real big transaction volume because of this issue.

It will just force the economies around the system to change. Not everyone will be able to maintain a full copy of the blockchain. We need to work on the trust issues of relying on 3rd parties to verify transactions for us...

This doesn't slow the machine down; it just causes change.


Title: Re: Once again, what about the scalability issue?
Post by: DeathAndTaxes on July 18, 2013, 01:10:33 AM
We won't get some real big transaction volume because of this issue.

It will just force the economies around the system to change.  Not everyone will be able to maintain a real block chain.  We need to work on trust issues with relying on 3rd parties to verify transactions for us....

this doesn't slow the machine down, just causes change.

WOW, so we should just give up and forget about the core of Bitcoin. We should just turn over and die, I guess. I see another person that drinks the core dev team juice. The blockchain needs to be reworked to fix a very simple problem, without the need for a complex solution.

Satoshi believed from day 1 that not every user would maintain a full node. That is why his paper includes a section on SPV. Decentralized doesn't have to mean every single human on the planet is an equal peer in a network covering all transactions for the human race. Tens of thousands or hundreds of thousands of nodes in a network used by millions or tens of millions provides sufficient decentralization that attacks to limit or exploit the network become infeasible.


Title: Re: Once again, what about the scalability issue?
Post by: gweedo on July 18, 2013, 01:13:39 AM
We won't get some real big transaction volume because of this issue.

It will just force the economies around the system to change.  Not everyone will be able to maintain a real block chain.  We need to work on trust issues with relying on 3rd parties to verify transactions for us....

this doesn't slow the machine down, just causes change.

WOW, so we should just give up and forget about the core of Bitcoin. We should just turn over and die, I guess. I see another person that drinks the core dev team juice. The blockchain needs to be reworked to fix a very simple problem, without the need for a complex solution.

Satoshi believed from day 1 that not every user would maintain a full node.  That is why his paper includes a section on SPV.

There's a huge difference between a 3rd-party server and SPV clients. Yes, one day, when it takes hundreds of GBs and there are no more optimizations that can be done.


Title: Re: Once again, what about the scalability issue?
Post by: DeathAndTaxes on July 18, 2013, 02:45:31 AM
What 3rd-party server? I run my own node and, at this rate of blockchain growth, will be able to for at least a century.


Title: Re: Once again, what about the scalability issue?
Post by: calian on July 18, 2013, 03:58:50 AM
Are any miners considering allowing people to sweep their 0.00000001 amounts to addresses with larger amounts fee-free? Though I guess if there's only 100 MB of addresses with positive balances no one might care.


Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on July 18, 2013, 04:47:45 AM
Yesterday I spent a whole hour downloading the blocks for the last 4 weeks. Not very convenient! Seems my 1 TB drive didn't help much.

What am I doing wrong?


Title: Re: Once again, what about the scalability issue?
Post by: Cyberdyne on July 18, 2013, 09:29:23 AM
Yesterday I spent a whole hour downloading blocks for last 4 weeks. Not very convenient! Seems my 1 TB drive didn't help much.

What am I doing wrong?

Leaving it offline too long?


Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on July 18, 2013, 05:15:25 PM
Leaving it offline too long?

Aye, I'm a non-hardcore, casual bitcoiner. But that was an example of an issue related to slow download/upload speeds. Freshly mined blocks can't be pruned.


Title: Re: Once again, what about the scalability issue?
Post by: DeathAndTaxes on July 18, 2013, 08:09:46 PM
Leaving it offline too long?

Aye, I'm a non-hardcore casual bitcoiner. But that was an example of an issue related to slow downloading/uploading speed. Freshly mined blocks can't be pruned.

If you are a casual user unable to keep the client online, why not just use an SPV client? You aren't contributing to the decentralization of the network if your node has an uptime of ~3%.


Title: Re: Once again, what about the scalability issue?
Post by: Syke on July 18, 2013, 10:40:45 PM
The blockchain on my phone is 1.06 MB. I think the blockchain is just fine in size.


Title: Re: Once again, what about the scalability issue?
Post by: Killdozer on July 18, 2013, 10:43:46 PM
People, stop saying that scalability is not a problem and writing about how cheap hard drives are.
Scalability is the number one problem stopping Bitcoin from becoming mainstream.
It doesn't matter how fast drives are growing; right now the blockchain keeps all the old information, which isn't even needed, and grows indefinitely. How hard is it to understand that this is a non-scalable, non-future-friendly scheme?
I am sure the devs know this and are doing their best to address it, and I am grateful for that. But saying that it's not a problem is just ignorant and stupid.

Quote
We won't get some real big transaction volume because of this issue.
I can't see how anybody is even arguing against this. I mean, it's even in the wiki: https://en.bitcoin.it/wiki/Scalability


Title: Re: Once again, what about the scalability issue?
Post by: Anon136 on July 18, 2013, 10:57:09 PM
Once again, people are working on scalability. Donate if you really care about the problem and want to help:

http://utxo.tumblr.com/

So is the idea here just to expand the max block size for miners whenever we hit a wall, and make it so that non-mining nodes don't need that level of bandwidth to audit transactions?

Sorry, I have put a lot of work into understanding Bitcoin in the abstract, but I'm no computer scientist. A lot of the technical minutiae goes over my head, especially with proposed alterations to Bitcoin, when all my effort has gone towards understanding Bitcoin as it stands.


Title: Re: Once again, what about the scalability issue?
Post by: DeathAndTaxes on July 18, 2013, 11:09:29 PM
People, stop saying that scalability is not a problem and writing about how cheap hard drives are.
The scalability is the number one problem stopping Bitcoin from becoming mainstream.
It doesn't matter how fast the drives are growing, right now the blockchain keeps all the old information which is not even needed, and grows indefinitely, how hard is it to understand that it is a non-scalable non-future-friendly scheme?
I am sure the devs know this and are doing their best to address it and I am grateful for that. But saying that it's not a problem is just ignorant and stupid.

Quote
We won't get some real big transaction volume because of this issue.
I can't see how anybody is even arguing against this. I mean, it's even in the wiki: https://en.bitcoin.it/wiki/Scalability

The historical storage is a non-issue, and the scalability page points that out. Bandwidth (for CURRENT blocks) presents a much harder bottleneck at extreme transaction levels, and after bandwidth comes memory, as fast validation requires the UTXO set to be cached in memory. Thankfully the dust rules will constrain the growth of the UTXO set; however, both bandwidth and memory will become an issue much sooner than storing the blockchain on disk.

The idea that today's transaction volume is held back by the "massive" blockchain isn't supported by the facts. Even the 1MB block limit provides for 7 tps, and the current network isn't even at 0.5 tps sustained. We could see a 1,300% increase in transaction volume before even the 1MB limit became an issue. At 1 MB per block the blockchain would grow by 50 GB per year. It would take 20 years of maxed-out 1MB blocks before the blockchain couldn't fit on an "ancient" (in the year 2033) 1TB drive.

Beyond 1MB the storage requirements will grow, but they will run up against memory and bandwidth long before disk space becomes too expensive. Still, as pointed out, eventually most nodes will not maintain a copy of the full blockchain; that will be a task reserved for "archive nodes". Instead they will just retain the block headers (which is ~4MB per year) and a deep enough section of the recent blockchain.
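A minimal sketch of the arithmetic behind these figures (assuming, as the post does, a 250-byte average transaction and one block every 600 seconds):

```python
# Throughput and chain growth implied by the 1 MB block limit.
# Assumptions from the post: 250-byte average tx, one block per 600 s.
BLOCK_LIMIT = 1024 * 1024   # 1 MiB in bytes
AVG_TX_BYTES = 250
BLOCK_INTERVAL_S = 600

tps = BLOCK_LIMIT / AVG_TX_BYTES / BLOCK_INTERVAL_S
yearly_gib = BLOCK_LIMIT * 6 * 24 * 365 / 1024**3
print(f"max ~{tps:.0f} tps, ~{yearly_gib:.0f} GiB of growth per year")
```

Which lands on the ~7 tps and roughly 50 GB/year cited above.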




Title: Re: Once again, what about the scalability issue?
Post by: Anon136 on July 18, 2013, 11:14:11 PM
People, stop saying that scalability is not a problem and writing about how cheap hard drives are...

The historical storage is a non-issue and the scalability page points that out...

So as far as addressing the bandwidth bottleneck goes, you are in the off-chain transaction camp, correct?


Title: Re: Once again, what about the scalability issue?
Post by: DeathAndTaxes on July 18, 2013, 11:29:53 PM
People, stop saying that scalability is not a problem and writing about how cheap hard drives are...

The historical storage is a non-issue and the scalability page points that out...

So as far as addressing the bandwidth bottleneck goes, you are in the off-chain transaction camp, correct?

No, although I believe off-chain tx will happen regardless. They happen right now: some people leave their BTC on MtGox, and when they pay someone who also has a MtGox address it happens instantly, without fees, and off the blockchain. Now imagine MtGox partners with an eWallet provider, and both companies hold funds in reserve to cover transfers to each other's private books. Suddenly you can transfer funds between the two services without touching the blockchain.

So off-chain tx are going to happen regardless.

I was just pointing out that, of the four critical resources:
bandwidth
memory
processing power
storage

Storage is so far behind the other ones that worrying about it is kinda silly. We will hit walls in memory and bandwidth at much lower tps than it would take before disk space became critical. The good news is that last-mile bandwidth is still increasing (doubling every 18-24 months); however, there is a risk of centralization if tx volume grows beyond what the "average" node can handle. If tx volume grows so fast that 99% of nodes simply can't maintain a full node because they lack sufficient bandwidth to keep up with the blockchain, then you will see a lot of full nodes go offline, and there is a risk that the network ends up in the hands of a much smaller number of nodes (likely in datacenters with extremely high-bandwidth links). Bandwidth is both the tightest bottleneck AND the one users have the least control over.

As an example, I recently paid $80 and doubled my workstation's RAM to 16GB. Let's say my workstation is viable for another 3 years: $80/36 is roughly $2 per month. Even if bitcoind today were memory-constrained on 8GB systems, I could bypass that bottleneck for a couple of dollars a month. I like Bitcoin, I want to see it work, and I will gladly pay that to make sure it happens. However, I can't pay an extra few dollars a month and double my upstream bandwidth (and for residential connections, upstream is the killer). So, hypothetically, if Bitcoin today were not memory- or storage-constrained but bandwidth-constrained, I would be "stuck": I'd be looking at either much higher costs or more exotic solutions (like running my node on a server).

Yeah that was longer than I intended. 

TL/DR: Yes, scalability will ALWAYS be an issue as long as tx volume is growing; however, storage is the least of our worries. The point is also somewhat moot because eventually most nodes won't maintain full blocks back to the genesis block; that will be reserved for "archive" nodes. There will likely be fewer of them, but as long as there are enough to maintain a decentralized consensus, the network can be just as secure, and users have a choice (full node, full headers & recent blocks, lite client) depending on their needs and risk tolerance.




Title: Re: Once again, what about the scalability issue?
Post by: Anon136 on July 19, 2013, 12:04:44 AM
so as far as addressing the bandwidth bottleneck problem you are in the off chain transaction camp correct?

No although I believe regardless off-chain tx will happen...

TL/DR: Yes scalability will ALWAYS be an issue as long as tx volume is growing however storage is the least of our worries...




ya i already knew all that ;D. i was just wondering how you thought the bandwidth bottleneck problem would be dealt with.


Title: Re: Once again, what about the scalability issue?
Post by: DeathAndTaxes on July 19, 2013, 12:47:04 AM
ya i already knew all that ;D. i was just wondering how you thought the bandwidth bottleneck problem would be dealt with.

My guess is a lot depends on how much Bitcoin grows and how quickly.  Also bandwidth is less of an issue unless the developers decide to go to an unlimited block size in the near future.  Even a 5MB block cap would be fairly manageable. 

With the protocol as it is now, let's assume a well-connected miner needs to transfer a block to peers in 3 seconds to remain competitive.  Say the average miner (with a node on a hosted server) has 100 Mbps upload bandwidth and needs to send the block to 20 peers: (100 * 3) / (8 * 20) = 1.875 MB, so we are probably fine "as is" up to a 2MB block.  With the avg tx being 250 bytes, that carries us through to 10 to 15 tps (2*1024^2 / 250 ≈ 8,400 tx per block, or ~14 tps).
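For anyone who wants to sanity-check those numbers, here is a quick Python sketch.  All inputs are the assumed figures from the post (100 Mbps upload, 3-second window, 20 peers, 250-byte txs), not protocol constants:

```python
# Back-of-the-envelope check of the block-broadcast budget described above.
UPLOAD_MBPS = 100        # assumed miner upload bandwidth
WINDOW_SEC = 3           # time budget to stay competitive
PEERS = 20               # peers the block is sent to
AVG_TX_BYTES = 250       # assumed average transaction size
BLOCK_INTERVAL_SEC = 600

# Megabits the miner can push in the window, split across peers, in MB.
max_block_mb = (UPLOAD_MBPS * WINDOW_SEC) / (8 * PEERS)
print(f"max block size: {max_block_mb:.3f} MB")   # 1.875 MB

# Transactions that fit in a 2 MB block, and the implied throughput.
txs_per_block = 2 * 1024**2 / AVG_TX_BYTES
print(f"txs per 2MB block: {txs_per_block:.0f}")
print(f"tps: {txs_per_block / BLOCK_INTERVAL_SEC:.1f}")  # ~14 tps
```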

PayPal is roughly 100 tps, and using bandwidth in the current inefficient manner would require an excessive amount of it.  Currently miners broadcast the transactions as part of the block, but that isn't necessary, as peers likely already have the transactions.  Miners can increase the hit rate by broadcasting the txs in the block to peers while the block is being worked on.  If a peer already knows of a tx, then for a block it just needs the header (trivial bandwidth) and the list of transaction hashes.  A soft fork to the protocol could allow broadcasting just the header and tx hash list.  If we assume the average tx is 250 bytes and the hash is 32 bytes, this means a >80% reduction in bandwidth required during the block transmission window (assumed 3 seconds to remain competitive without excessive orphans).

Note this doesn't eliminate the bandwidth necessary to relay txs, but it makes more efficient use of bandwidth.  Rather than a giant spike in required bandwidth for 3-5 seconds every 600 seconds and underutilized bandwidth the other 595 seconds, it evens out the spikes, getting more accomplished without higher latency.  At 100 tps a block would on average have 60,000 txs.  At 32 bytes each, broadcast over 3 seconds to 20 peers, that would require ~100 Mbps: an almost 8x improvement in miner throughput without increasing latency or peak bandwidth.
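The hash-list relay arithmetic above can be checked the same way.  Again these are the post's assumed figures (60,000 txs per block at 100 tps, 250-byte txs, 32-byte hashes, 20 peers, 3-second window):

```python
# Rough check of the savings from relaying tx hashes instead of full txs.
TXS = 60_000             # txs per block at 100 tps over 600 s
FULL_TX, TX_HASH = 250, 32
PEERS, WINDOW_SEC = 20, 3

# Fraction of block-transmission bandwidth saved per tx.
reduction = 1 - TX_HASH / FULL_TX
print(f"bandwidth saved during block transmission: {reduction:.0%}")  # 87%

# Peak upload needed to send the hash list to all peers in the window.
hash_list_bits = TXS * TX_HASH * 8
mbps = hash_list_bits * PEERS / WINDOW_SEC / 1e6
print(f"peak upload needed: {mbps:.0f} Mbps")  # ~102 Mbps

# Throughput gain versus sending full transactions in the block.
print(f"throughput gain vs full blocks: {FULL_TX / TX_HASH:.1f}x")  # ~7.8x
```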

For existing non-mining nodes it would be trivial to keep up.  Let's assume the average node relays a tx to 4 of its 8 peers; nodes could use improved relay logic to check whether a peer needs a block before relaying.  To keep up, a node just needs to handle the tps plus the overhead of blocks without falling behind (i.e. one 60,000-tx block in 600 seconds).  Even with only 1 Mbps upload it should be possible to keep up [ (100)*(250+32)*(8)*(4) / 1024^2 < 1.0 ].
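The bracketed inequality spelled out, using the same assumptions (100 tps, 250-byte txs plus a 32-byte hash, each tx forwarded to 4 of 8 peers, 1 Mbps upload):

```python
# Upstream bandwidth a non-mining node needs to relay txs at 100 tps.
TPS = 100
TX_BYTES, HASH_BYTES = 250, 32   # full tx once, plus its hash at block time
RELAY_PEERS = 4                  # relayed to 4 of 8 peers

upload_mbps = TPS * (TX_BYTES + HASH_BYTES) * 8 * RELAY_PEERS / 1024**2
print(f"required upload: {upload_mbps:.2f} Mbps")  # ~0.86 Mbps, under 1 Mbps
```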

Now bootstrapping new nodes is a greater challenge.  The block headers are trivial (~4 MB per year), but it all depends on how big blocks are and how far back non-archive nodes will want/need to go.  The higher the tps relative to the average node's upload bandwidth, the longer it will take to bootstrap a node to a given depth.





Title: Re: Once again, what about the scalability issue?
Post by: tvbcof on July 19, 2013, 03:26:54 AM

Satoshi believed from day 1 that not every user would maintain a full node.  That is why his paper includes a section on SPV.  Decentralized doesn't have to mean every single human on the planet is an equal peer in a network covering all transactions for the human race.  Tens of thousands or hundreds of thousands of nodes in a network used by millions or tens of millions provides sufficient decentralization that attacks to limit or exploit the network become infeasible.

Heh.  I'm still waiting for the bitcoin project to get honest and state that not all 'peers' are 'equal peers' in the 'p2p' network.  Somehow it seems not to be a priority.  Funny that.

It also would not hurt (from a perspective of truth in advertising) to stipulate that Bitcoin is 'eventually-deflationary', non-scalable, far from anonymous, and that the fluff about blockchain pruning was either marketing BS or has been de-prioritized (one suspects in order to assist in the formation of 'server-variety-peers' and the shifting of non-commercial entities into the 'client-variety-peer' category).



Title: Re: Once again, what about the scalability issue?
Post by: DeathAndTaxes on July 19, 2013, 03:44:35 AM
Heh.  I'm still waiting for the bitcoin project to get honest and state that not all 'peers' are 'equal peers' in the 'p2p' network.  Somehow it seems not to be a priority.  Funny that.

I think it is simpler than that.  If you aren't a full node, you aren't a peer. Period.  All peers are equal, but not all users are peers.  One way to look at it: interbank networks are a form of peer-to-peer networking (where access to the network is limited to a selected few trusted peers).  If you send an ACH or bank wire you are using a peer-to-peer network, but YOU aren't one of the peers.  The sending and receiving banks (and any interim banks) are the peers.

I think a similar thing will happen with Bitcoin, with one exception: it doesn't matter what computing power you have available or are willing to acquire.  The banking p2p network is a good ole boys club, peons not invited.  With Bitcoin you at least have the CHOICE of being a peer.  In the long run (and this would apply to other crypto-currencies as well) a large number, possibly a super majority, of users will not be peers.  They are willing to accept the tradeoff of reduced security for convenience and become a user rather than a peer of the network.

TL/DR:
No such thing as less-than-equal peers; you are either a peer or you aren't.  In Bitcoin v0.1, 100% of nodes were peers; today some large x% are, and in time that x% will shrink.  Peers are still peers, but not everyone will want or need to be a peer.  There is a real cost to being a peer, and that cost (regardless of scalability improvements) is likely to rise over time.

Quote
and the fluff about blockchain pruning was either marketing BS or is de-prioritized (one suspects in order to assist in the formation of 'server-variety-peers' and shifting of non-commercial entities into the 'client-variety-peer' category.

I don't see any support for that claim.  On the contrary ...
https://bitcointalk.org/index.php?topic=252937.0

It is a non-trivial issue.  For complete security we want a large number of independent nodes maintaining a full historical copy of the blockchain.  It doesn't need to be every node, but enough that there remains a decentralized, hard-to-corrupt consensus on the canonical history of transactions.  There is a real risk in a jump to a pruned db model that information is lost or overly centralized.  That doesn't mean the problem is unsolvable; however, it is better to err on the side of caution.


Title: Re: Once again, what about the scalability issue?
Post by: tvbcof on July 19, 2013, 05:31:05 AM
...
TL/DR:
No such thing as less-than-equal peers; you are either a peer or you aren't.  In Bitcoin v0.1, 100% of nodes were peers; today some large x% are, and in time that x% will shrink.  Peers are still peers, but not everyone will want or need to be a peer.  There is a real cost to being a peer, and that cost (regardless of scalability improvements) is likely to rise over time.
...

I was trying to be a bit facetious in using terms like 'unequal peers' and '[server|client]-variety-peers'.  Certain of the more technical folks here might appreciate it, and I suspect that you are among them.

Anyway, I'm glad I erred on the side of brevity (this time) and allowed you to make the point that the solution we are migrating towards looks an awful lot like what we see in ACH.  How long it takes to get there (if ever) will, I suspect, be dictated mainly by transaction-per-unit-time growth.

I also suspect that you may not be thrilled about this evolution, but may very well see it as a necessary evil.  If so, I respectfully disagree.  In my mind it makes the solution not much good for much of anything, and that is particularly the case in light of the Snowden revelations (or 'confirmations' to some of us).

---

Again though, I find it scammy and offensive to prominently label the system 'peer-2-peer' as long as there is a likelihood that it's going SPV, and changing the default recommendation to Multibit is ample evidence that that is exactly the path chosen by those calling the shots.  The main things Bitcoin has going for it are that it is 'first' and that it is 'open-source'.  It is honest and appropriate to dwell on those things because they happen to be true.



Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on July 19, 2013, 07:03:41 AM
If you are a casual user unable to keep the client online why not just use a SPV client.

I thought there wasn't any. Which one would u recommend?


Title: Re: Once again, what about the scalability issue?
Post by: drawingthesun on July 19, 2013, 07:15:16 AM
bitcoin is made for criminals, it wasn't intended to grow big for mainstream transacting

satoshi said this in the early days

Can you show me where Satoshi said this?


Title: Re: Once again, what about the scalability issue?
Post by: DeathAndTaxes on July 19, 2013, 07:18:32 AM
If you are a casual user unable to keep the client online why not just use a SPV client.

I thought there wasn't any. Which one would u recommend?

https://multibit.org/

It is linked to and recommended from bitcoin.org


Title: Re: Once again, what about the scalability issue?
Post by: tvbcof on July 19, 2013, 07:22:00 AM
If you are a casual user unable to keep the client online why not just use a SPV client.

I thought there wasn't any. Which one would u recommend?

Multibit.  It's being promoted as the default now by bitcoin.org.  Or so it seems to me via its placement on the web page (and statements on the 'sticky' thread, which my very much on-topic post was deleted from since it was not good marketing material, apparently).

  http://bitcoin.org/en/choose-your-wallet

To Multibit's credit, the strings 'peer' or 'p2p' do not appear obviously anywhere on their site, though they still feature front-and-center on bitcoin.org.  Again, it seems pretty scammy to me.



Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on July 19, 2013, 07:37:48 AM
If you are a casual user unable to keep the client online why not just use a SPV client.

I thought there wasn't any. Which one would u recommend?

https://multibit.org/

It is linked to and recommended from bitcoin.org

Thx, I'll learn about it to make sure I'm not supposed to trust it to be able to use it.


Title: Re: Once again, what about the scalability issue?
Post by: Anon136 on July 19, 2013, 03:45:28 PM
ya i already knew all that ;D. i was just wondering how you thought the bandwidth bottleneck problem would be dealt with.

My guess is a lot depends on how much Bitcoin grows and how quickly.  Also bandwidth is less of an issue unless the developers decide to go to an unlimited block size in the near future.  Even a 5MB block cap would be fairly manageable. 

With the protocol as it is now, let's assume a well-connected miner needs to transfer a block to peers in 3 seconds to remain competitive.  Say the average miner (with a node on a hosted server) has 100 Mbps upload bandwidth and needs to send the block to 20 peers: (100 * 3) / (8 * 20) = 1.875 MB, so we are probably fine "as is" up to a 2MB block.  With the avg tx being 250 bytes, that carries us through to 10 to 15 tps (2*1024^2 / 250 ≈ 8,400 tx per block, or ~14 tps).

PayPal is roughly 100 tps, and using bandwidth in the current inefficient manner would require an excessive amount of it.  Currently miners broadcast the transactions as part of the block, but that isn't necessary, as peers likely already have the transactions.  Miners can increase the hit rate by broadcasting the txs in the block to peers while the block is being worked on.  If a peer already knows of a tx, then for a block it just needs the header (trivial bandwidth) and the list of transaction hashes.  A soft fork to the protocol could allow broadcasting just the header and tx hash list.  If we assume the average tx is 250 bytes and the hash is 32 bytes, this means a >80% reduction in bandwidth required during the block transmission window (assumed 3 seconds to remain competitive without excessive orphans).

Note this doesn't eliminate the bandwidth necessary to relay txs, but it makes more efficient use of bandwidth.  Rather than a giant spike in required bandwidth for 3-5 seconds every 600 seconds and underutilized bandwidth the other 595 seconds, it evens out the spikes, getting more accomplished without higher latency.  At 100 tps a block would on average have 60,000 txs.  At 32 bytes each, broadcast over 3 seconds to 20 peers, that would require ~100 Mbps: an almost 8x improvement in miner throughput without increasing latency or peak bandwidth.

For existing non-mining nodes it would be trivial to keep up.  Let's assume the average node relays a tx to 4 of its 8 peers; nodes could use improved relay logic to check whether a peer needs a block before relaying.  To keep up, a node just needs to handle the tps plus the overhead of blocks without falling behind (i.e. one 60,000-tx block in 600 seconds).  Even with only 1 Mbps upload it should be possible to keep up [ (100)*(250+32)*(8)*(4) / 1024^2 < 1.0 ].

Now bootstrapping new nodes is a greater challenge.  The block headers are trivial (~4 MB per year), but it all depends on how big blocks are and how far back non-archive nodes will want/need to go.  The higher the tps relative to the average node's upload bandwidth, the longer it will take to bootstrap a node to a given depth.

so even with an unlimited block size there would still be a market for transaction inclusion in blocks since miners who attempted to relay a block that was too large would find it orphaned. that's important for network security.

also correct me if im wrong but individual miners wouldnt even need a 100 Mbps connection would they? just the pools.

100 tps is way plenty. even if we assume the load of all credit card companies combined, 100 tps would be enough to allow anyone who wanted to do an on-chain transaction to be able to afford it (excluding micro transactions, but who cares about that). which is all that matters: we dont need a system where every transaction avoids all counterparty risk, what we need is a system where avoiding counterparty risk is affordable. 100 tps would provide that.

this post put my mind at ease. i mean im already pretty significantly invested in bitcoin because even if there was no solution to the scalability problem bitcoin would still have great utility. its nice to know however that there are solutions.


Title: Re: Once again, what about the scalability issue?
Post by: DeathAndTaxes on July 19, 2013, 04:29:28 PM
Quote
also correct me if im wrong but individual miners wouldnt even need a 100mb connection would they? just the pools.
Correct.  In this context only the entity actually building and broadcasting a block is a "miner". So yes: the pool server, solo miners (ASICMiner), and setups like p2pool.  Basically, if you are broadcasting the block yourself via bitcoind then you are a "miner".   IMHO "pool workers" aren't really miners and calling them that is inaccurate.  They are just computing power providers (CPPs). :) I actually coined that term in a request to FinCEN for an administrative ruling, to highlight the distinction between the entity creating new blocks/coins and the entities merely providing the resources.  We wouldn't call the power company or ISPs "miners", although electrical power and connectivity are required inputs to creating new coins/blocks.

so even with an unlimited block size there would still be a market for transaction inclusion in blocks since miners who attempted to relay a block that was too large would find it orphaned. that's important for network security.

Yes, however the risk is centralization.  It isn't that the network "couldn't" handle unlimited blocks; it is that we might not like the consequences of unlimited blocks. As block sizes get larger and larger it becomes more difficult to keep orphans to a manageable level.  Orphans directly affect profitability, and for pools it is a double hit: high orphans mean less gross revenue per unit of hashing power, but they also mean the pool is less competitive, so miners move to another pool.  The pool's revenue per unit of hashing power is reduced and its overall hashing power is reduced too.  So pools have a very large incentive to manage orphans.    Now, it is important to remember it doesn't matter how long it takes for ALL nodes to get your block, just how long it takes for a majority of miners to get your block.  The average connection between major miners is what matters.

If the average connection can handle the average block then it is a non-issue.  However, imagine if it can't, and orphan rates go up across the board.  Pools are incentivized to reduce orphans, so imagine if x pools/major solo miners (enough to make up say 60% of total hashing power) moved all their servers to the same datacenter (or, for redundancy, the same mirrored sets of datacenters around the world).  Those pools would have essentially unlimited free bandwidth at line speed (i.e. 1Gbps for cheap and 10Gbps for reasonable cost).  The communication between pools wouldn't be over the open (slow) internet but near instantaneous on a private network with boatloads of excess bandwidth.  This is very simple to accomplish if the miners are in the same datacenter: the miners just share one LAN connection on a private switch to communicate directly.   Now for these pools a 40MB, 400MB, or even 4000MB block is a non-issue.  They can relay it to each other, verify, and start on the next block in a fraction of a second: near 0% orphan rates and reduced bandwidth costs.  For other miners, however, the burden of these large blocks means very high orphan rates.  How long do you think it will take before CPPs abandon their pool with 5% orphan rates for ones with near zero?  That isn't good for decentralization of the network.

I don't want to sound doomsday, but this is why BANDWIDTH (not stupid disk space) is the critical resource and the one which requires some careful thought when raising the block limit. It is important that average block size not exceed what a miner on an average connection can broadcast to peers in a reasonable amount of time (3 seconds = ~0.5% orphan rate) on an average public internet connection.  Granted, those are pretty vague terms.  Obviously a 1MB block is below that critical level and a 1GB block is obviously above it.  Is 100MB fine today? How about in 10 years? Is 5MB fine today? How about 2MB and doubling every two years? I am not saying I have the answers, but that is the kind of thing we (the community at large, not just Gavin et al.) need to think about critically before saying "yeah, let's go to unlimited blocks and let the market figure it out".  I have no doubt the market will figure it out; however, one might not like what end state it reaches.
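One way to arrive at the "3 seconds ≈ 0.5% orphan rate" figure is to model block discovery as a Poisson process with a 600-second mean interval, so the chance a competing block appears during your propagation delay t is roughly 1 - exp(-t/600).  This is a modeling sketch of my own, not a measured network property:

```python
import math

def orphan_rate(delay_sec: float, interval_sec: float = 600.0) -> float:
    """Probability a competing block is found during the propagation delay,
    assuming block discovery is a Poisson process (a simplifying assumption)."""
    return 1 - math.exp(-delay_sec / interval_sec)

for t in (1, 3, 10, 30):
    print(f"{t:>3} s propagation -> {orphan_rate(t):.2%} orphan risk")
# 3 s propagation -> ~0.50% orphan risk
```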

The good news is bandwidth is still increasing rapidly and the cost per unit of data is falling just as rapidly.  This is true both at the last mile (residential connections) and in datacenters.  So it is a problem which is manageable as long as average block size doesn't eclipse that growth.

Quote
100 tps is way plenty, even if we assume the load of all credit card companies combined 100 tps would be enough to allow anyone who wanted to do an on-chain transaction to be able to afford it (excluding micro transactions but who cares about that). which is all that matters, we dont need a system where every transaction avoids all counter party risk, what we need is a system where avoiding counter party risk is affordable. 100tps would provide that.  this post put my mind at ease. i mean im already pretty significantly invested in bitcoin because even if there was no solution the the scalability problem bitcoin would still have great utility, its nice to know however that there are solutions.

Don't take any of this as "set in stone"; it is more like when they ask you "how many windows are there in New York City?" in an interview.  Nobody cares what the exact number is; what the interviewer is looking for is what logic you will use to come up with an answer.  If someone thinks my logic is flawed (and it certainly might be), well, that is fine and I would love to hear it.  If someone can convince me otherwise, that is even better.  However, show me some contrary logic. If the counterargument is merely "unlimited or it is censorship", well, that doesn't really get us anywhere.

There are four different bandwidth bottlenecks.

Miners
Miners are somewhat unique in that they have to broadcast a block very quickly (say a 3-second or less target) to avoid excessive orphans and the loss of revenue that comes with them.  This means their bandwidth requirements are "peaky".  That can be smoothed out somewhat with protocol optimizations; however, I would expect to see miners run into a problem first.  The good news is it is not that difficult for pools or even solo miners to set up their bitcoind node in a datacenter where bandwidth is more available and at lower cost.

Non mining full nodes
Full nodes don't need to receive blocks within seconds.  The positive is that as long as they receive them in a reasonable amount of time they can function.  The negative is that unless we start moving to split wallets, these nodes are likely on residential connections which have limited upstream bandwidth.  A split wallet is where you have a private bitcoind running in a datacenter and your local wallet has no knowledge of the bitcoin network; it just communicates securely with your private bitcoind.  An example of this today would be an electrum client connecting to your own private, secure electrum server.

Bootstrapping nodes
Another issue to consider is that if full nodes are close to peak utilization you can't bootstrap new nodes.  Imagine a user has a 10 Mbps connection but the transaction volume is 9 Mbps.  The blockchain is growing at 9 Mbps, so the user is only "gaining" on the end of the chain at 1 Mbps.  If the blockchain is say 30 GB, it will take not ~7 hours (at 10 Mbps) but ~70 hours to catch up.
The good news here is there is some slack, because most residential connections have more downstream bandwidth than upstream bandwidth, and for synced nodes the upstream bandwidth is the critical resource.
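The bootstrap arithmetic above made explicit, using the post's assumed figures (a 30 GB chain, a 10 Mbps link, the chain growing at 9 Mbps while you sync):

```python
# Time to sync a chain that keeps growing while you download it.
CHAIN_GB = 30
LINK_MBPS = 10
GROWTH_MBPS = 9

chain_mbit = CHAIN_GB * 8 * 1000  # decimal GB for round numbers

# If the chain stood still vs. catching up at the effective 1 Mbps rate.
idle_hours = chain_mbit / LINK_MBPS / 3600
catchup_hours = chain_mbit / (LINK_MBPS - GROWTH_MBPS) / 3600
print(f"static chain:  {idle_hours:.1f} h")     # ~6.7 h
print(f"growing chain: {catchup_hours:.1f} h")  # ~66.7 h
```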

SPV nodes
The bandwidth requirements for SPV nodes are negligible and unlikely to be an issue, which is a good thing; however, SPV nodes don't provide for network security.  While SPV is important so casual users are not hit with the rising costs of running a full node, at the same time we want to ensure that running a full node remains a realistic option for enthusiasts.  Maybe not everyone can run a full node, but it shouldn't be out of the reach of the majority of potential users (i.e. requiring 200 Mbps symmetric low-latency connectivity, 20TB of storage, an enterprise-grade RAID controller, 64GB of RAM, and quad Xeon processors).  How many full nodes are needed?  Well, more is always better; there is no scenario where more hurts us in any way, so it is more a question of how few we can "get away with" while staying above that number.  Is 100 enough?  Probably not.  1,000? 10,000? 100,000?  10% of users, 1% of users?  I don't have the answer; it is just something to think about.  Higher requirements for full nodes mean fewer full nodes but more on-chain trustless transaction volume.  It is a tradeoff, a compromise, and there is no perfect answer.  1GB blocks are non-optimal in that they favor volume over decentralization too much.  Staying at 1MB blocks forever is non-optimal in that it favors decentralization over volume too much.  The optimal point is probably somewhere in the middle, and the middle will move with technology.  We don't need to hit the exact optimal point; there is likely a large range which "works", and the goal should be to keep it down the middle of the lane.


Title: Re: Once again, what about the scalability issue?
Post by: Anon136 on July 19, 2013, 08:33:11 PM
good information thanks.

do you think Gavin has the leverage/influence/power to remove the block size limit? i dont think he does.


Title: Re: Once again, what about the scalability issue?
Post by: bytemaster on July 19, 2013, 10:49:25 PM
Bandwidth is more critical than disk space for decentralization. 

Parallel chains with a fixed limit on bandwidth per chain would be nice.

The ability to move value between chains would also be nice.

To move value between chains means a 3rd chain is required that is merge-mined with the other 2 chains.  This 3rd chain is responsible for confirming movements from one chain to the other and vice versa.   The cross-chain would have to be very low bandwidth, say 1% of bandwidth allocated for the main chains. 

With this approach you could have decentralization while still having only one crypto-currency.



Title: Re: Once again, what about the scalability issue?
Post by: Anon136 on July 19, 2013, 11:00:38 PM
Bandwidth is more critical than disk space for decentralization. 

Parallel chains with a fixed limit on bandwidth per chain would be nice.

The ability to move value between chains would also be nice.

To move value between chains means a 3rd chain is required that is merge-mined with the other 2 chains.  This 3rd chain is responsible for confirming movements from one chain to the other and vice versa.   The cross-chain would have to be very low bandwidth, say 1% of bandwidth allocated for the main chains. 

With this approach you could have decentralization while still having only one crypto-currency.



this is the same solution i came up with when i first started thinking about this issue.


Title: Re: Once again, what about the scalability issue?
Post by: justusranvier on July 20, 2013, 12:56:08 AM
Assume there exists a demand for cryptocurrency-denominated transactions. This demand will require a certain amount of bandwidth to satisfy.

Suppose the demand is high enough that the entire cryptocurrency ecosystem requires 10 Gbit/sec average bandwidth.

How much does it matter if this 10 Gbit/sec global transaction demand is satisfied by 100 cryptocurrencies or 1 cryptocurrency?

Other factors to consider:

Would the average person prefer to manage a balance of 100 different cryptocurrencies, or would they prefer to hold their savings in a single currency that works everywhere? If you're having trouble figuring this one out, consider whether the average Internet user prefers to have a single global networking standard that makes all resources accessible from any ISP, or if they would prefer to go back to the 1990s walled garden days of AOL, Genie, Compuserve, and other non-interoperable services.

What does the n² scaling property of the network effect (http://en.wikipedia.org/wiki/Metcalfe's_law) imply for the value of a single network that can handle all 10 Gbit/sec of transactions itself vs 100 networks that can handle 100 Mbit/sec each?
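A toy version of that Metcalfe comparison, under the crude assumption that network value scales as n² in the number of users (the user count below is purely hypothetical):

```python
# Metcalfe's law sketch: value ~ n^2, so splitting one user base across
# k equal, non-interoperable networks divides total "value" by k.
def metcalfe_value(users: float) -> float:
    return users ** 2

N = 10_000_000  # hypothetical total users
single = metcalfe_value(N)
split = 100 * metcalfe_value(N / 100)  # 100 networks, 1/100th the users each
print(f"single network is {single / split:.0f}x more valuable")  # 100x
```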


Title: Re: Once again, what about the scalability issue?
Post by: bytemaster on July 20, 2013, 01:13:15 AM
Assume there exists a demand for cryptocurrency-denominated transactions. This demand will require a certain amount of bandwidth to satisfy.

Suppose the demand is high enough that the entire cryptocurrency ecosystem requires 10 Gbit/sec average bandwidth.

How much does it matter if this 10 Gbit/sec global transaction demand is satisfied by 100 cryptocurrencies or 1 cryptocurrency?

Other factors to consider:

Would the average person prefer to manage a balance of 100 different cryptocurrencies, or would they prefer to hold their savings in a single currency that works everywhere? If you're having trouble figuring this one out, consider whether the average Internet user prefers to have a single global networking standard that makes all resources accessible from any ISP, or if they would prefer to go back to the 1990s walled garden days of AOL, Genie, Compuserve, and other non-interoperable services.

What does the n² scaling property of the network effect (http://en.wikipedia.org/wiki/Metcalfe's_law) imply for the value of a single network that can handle all 10 Gbit/sec of transactions itself vs 100 networks that can handle 100 Mbit/sec each?

The network effect has to be balanced with the centralization effect.  What you want is a single currency (for pricing purposes) that scales across many banks (chains) for decentralization purposes.   

You would end up with a situation where transacting within a single chain is almost free (like transacting with a single bank) but transacting between different chains is more expensive (like a wire transfer).    If you want to send someone money you have to know both their bank and account number.   

Of course, because private keys are good on all chains, you can send them money on your chain and leave it up to them to move the funds to their normal chain.   Large centralized wallets would be able to integrate all of the chains into one 'account' to give the appearance of a single large chain while still allowing individual users with 1 Mbps internet connections to participate.

Assuming 10,000 trx/sec at 1024 bytes/trx, it would require an ~80 Mbit/sec connection to handle all of the transaction traffic, which means that 256 chains could probably handle the entire transaction volume of VISA / MasterCard / PayPal and all wire transfers combined, and yet you would only have one chain per state on average.    Of course, a lot of this transaction volume will probably still flow through trusted 3rd parties that enable 'instant' transfers rather than 3+ confirmation transfers.
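Checking those figures (the post's own assumptions: 10,000 tx/sec total, 1024 bytes per tx, traffic split evenly across 256 chains):

```python
# Aggregate and per-chain bandwidth for the parallel-chains scheme above.
TPS, TX_BYTES, CHAINS = 10_000, 1024, 256

total_mbps = TPS * TX_BYTES * 8 / 1e6
per_chain_mbps = total_mbps / CHAINS
print(f"total:     {total_mbps:.0f} Mbps")       # ~82 Mbps, the "80 Mbit" figure
print(f"per chain: {per_chain_mbps:.2f} Mbps")   # ~0.32 Mbps, fits a 1 Mbps link
```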







Title: Re: Once again, what about the scalability issue?
Post by: Zangelbert Bingledack on July 20, 2013, 04:42:01 AM
It is important that average block size not exceed what a miner on an average connection can broadcast to peers in a reasonable amount of time (3 seconds = ~0.5% orphan rate) on an average public internet connection.  Granted, those are pretty vague terms.  Obviously a 1MB block is below that critical level and a 1GB block is obviously above it.  Is 100MB fine today? How about in 10 years? Is 5MB fine today? How about 2MB and doubling every two years? I am not saying I have the answers, but that is the kind of thing we (the community at large, not just Gavin et al.) need to think about critically before saying "yeah, let's go to unlimited blocks and let the market figure it out".  I have no doubt the market will figure it out; however, one might not like what end state it reaches.

What makes you think the market won't take care of the average miner, such as by limiting blocksize normatively? Any claim that the market won't take care of something should be justified by specifying how you think the market is broken for that function - because the norm is for markets to work, even if we can't immediately see how.


Title: Re: Once again, what about the scalability issue?
Post by: bytemaster on July 20, 2013, 04:46:06 AM
It is important that average block size not exceed what a miner on an average connection can broadcast to peers in a reasonable amount of time (3 seconds = ~0.5% orphan rate) on an average public internet connection.  Granted, those are pretty vague terms.  Obviously a 1MB block is below that critical level and a 1GB block is obviously above it.  Is 100MB fine today? How about in 10 years? Is 5MB fine today? How about 2MB and doubling every two years? I am not saying I have the answers, but that is the kind of thing we (the community at large, not just Gavin et al.) need to think about critically before saying "yeah, let's go to unlimited blocks and let the market figure it out".  I have no doubt the market will figure it out; however, one might not like what end state it reaches.

What makes you think the market won't take care of the average miner, such as by limiting blocksize normatively? Any claim that the market won't take care of something should be justified by specifying how you think the market is broken for that function - because the norm is for markets to work, even if we can't immediately see how.

Considering I am part of this market and am building these kinds of solutions, your claim is 100% right on about the market sorting it out.


Title: Re: Once again, what about the scalability issue?
Post by: DeathAndTaxes on July 20, 2013, 05:06:00 AM
It is important that average block size not exceed what a miner on an average connection can broadcast to peers in a reasonable amount of time (3 seconds = ~0.5% orphan rate) on an average public internet connection.  Granted those are pretty vague terms.  Obviously a 1MB block is below that critical level and a 1GB block is obviously above it.  Is 100MB fine today? How about in 10 years? Is 5MB fine today? How about 2MB and doubling every two years? I am not saying I have the answers but that is the kind of thing we (the community at large, not just Gavin et al.) need to think about critically before saying "yeah, let's go to unlimited blocks and let the market figure it out".  I have no doubt the market will figure it out; however, one might not like what end state it reaches.

What makes you think the market won't take care of the average miner, such as by limiting blocksize normatively? Any claim that the market won't take care of something should be justified by specifying how you think the market is broken for that function - because the norm is for markets to work, even if we can't immediately see how.

I never said the market wouldn't take care of it, just that you might not like the outcome.

One outcome would be block sizes become so large that pools with >51% of hashing power run private links to each other in the same datacenter in order to keep orphans in line.  That drives other pools out of business, the major 3-4 pools/solo-corps grow even larger and decide to "optimize" the network by simply excluding the blocks of any pool/solo not in their organization.  That is one way of the market "taking care" of the problem that demand for higher tx volume exceeds the capabilities of public peer to peer links.  The market doesn't necessarily care about the advantages of decentralization, transparency, and fair play, and the "solution" arrived at may not be one that most bitcoiners find desirable.  It isn't a prediction, it is a risk; that is all I am saying.  As long as avg block size is low relative to bandwidth capacity between miners on open public peer to peer links there is no catalyst for such a move; there is no problem for the market to "solve".


Title: Re: Once again, what about the scalability issue?
Post by: bytemaster on July 20, 2013, 05:09:06 AM
It is important that average block size not exceed what a miner on an average connection can broadcast to peers in a reasonable amount of time (3 seconds = ~0.5% orphan rate) on an average public internet connection.  Granted those are pretty vague terms.  Obviously a 1MB block is below that critical level and a 1GB block is obviously above it.  Is 100MB fine today? How about in 10 years? Is 5MB fine today? How about 2MB and doubling every two years? I am not saying I have the answers but that is the kind of thing we (the community at large, not just Gavin et al.) need to think about critically before saying "yeah, let's go to unlimited blocks and let the market figure it out".  I have no doubt the market will figure it out; however, one might not like what end state it reaches.

What makes you think the market won't take care of the average miner, such as by limiting blocksize normatively? Any claim that the market won't take care of something should be justified by specifying how you think the market is broken for that function - because the norm is for markets to work, even if we can't immediately see how.

I never said the market wouldn't take care of it, just that you might not like the outcome.

One outcome would be block sizes become so large that pools with >51% of hashing power run private links to each other in the same datacenter in order to keep orphans in line.  That drives other pools out of business.  That is the market "taking care of it"; the market doesn't necessarily care about the advantages of decentralization.  It is a risk; that is all I am saying.  It isn't a prediction, just pointing out that orphans are lost revenue, and if public peer to peer internet links can't keep orphans low there are more centralized "solutions" to that problem.

The market certainly cares about decentralization because decentralization increases competition and profit opportunities.   Bitcoin would have no value if the market didn't value decentralization.   Mining pools already self-regulate to prevent growing much over 40%.    And of course, alt-coins will force Bitcoin to adapt and evolve.   Bitcoin will evolve or die, and that is the market at work.


Title: Re: Once again, what about the scalability issue?
Post by: edmundedgar on July 20, 2013, 10:22:37 AM
Bandwidth is more critical than disk space for decentralization. 

Parallel chains with a fixed limit on bandwidth per chain would be nice.

The ability to move value between chains would also be nice.

To move value between chains means a 3rd chain is required that is merge-mined with the other 2 chains.  This 3rd chain is responsible for confirming movements from one chain to the other and vice versa.   The cross-chain would have to be very low bandwidth, say 1% of bandwidth allocated for the main chains. 

With this approach you could have decentralization while still having only one crypto-currency.

Strictly speaking I'm not sure you'd even need the ability to be able to move value between chains to shard Bitcoin into multiple chains for the same currency. Multiple chains, all of equal value, coins always stay on the chain where they started, your address works on all the chains. If I pay you some coins, you shouldn't care which chain I've paid you on. Your money is recorded in a permanent ledger, who cares which one?

That said, the ability to move across chains as you describe would be a nice way to guarantee that people don't get funny ideas about coins on Chain X being worth more than coins on Chain Y.


Title: Re: Once again, what about the scalability issue?
Post by: bytemaster on July 20, 2013, 12:29:24 PM
Bandwidth is more critical than disk space for decentralization. 

Parallel chains with a fixed limit on bandwidth per chain would be nice.

The ability to move value between chains would also be nice.

To move value between chains means a 3rd chain is required that is merge-mined with the other 2 chains.  This 3rd chain is responsible for confirming movements from one chain to the other and vice versa.   The cross-chain would have to be very low bandwidth, say 1% of bandwidth allocated for the main chains. 

With this approach you could have decentralization while still having only one crypto-currency.

Strictly speaking I'm not sure you'd even need the ability to be able to move value between chains to shard Bitcoin into multiple chains for the same currency. Multiple chains, all of equal value, coins always stay on the chain where they started, your address works on all the chains. If I pay you some coins, you shouldn't care which chain I've paid you on. Your money is recorded in a permanent ledger, who cares which one?

That said, the ability to move across chains as you describe would be a nice way to guarantee that people don't get funny ideas about coins on Chain X being worth more than coins on Chain Y.

Each chain would have value based upon the 'network effect'... people already have funny ideas about Chain X and Chain Y ... aka Litecoin.  These ideas are not so funny, because if anyone could launch a new chain then you could print your own money.


Title: Re: Once again, what about the scalability issue?
Post by: edmundedgar on July 20, 2013, 12:57:50 PM
Bandwidth is more critical than disk space for decentralization. 

Parallel chains with a fixed limit on bandwidth per chain would be nice.

The ability to move value between chains would also be nice.

To move value between chains means a 3rd chain is required that is merge-mined with the other 2 chains.  This 3rd chain is responsible for confirming movements from one chain to the other and vice versa.   The cross-chain would have to be very low bandwidth, say 1% of bandwidth allocated for the main chains. 

With this approach you could have decentralization while still having only one crypto-currency.

Strictly speaking I'm not sure you'd even need the ability to be able to move value between chains to shard Bitcoin into multiple chains for the same currency. Multiple chains, all of equal value, coins always stay on the chain where they started, your address works on all the chains. If I pay you some coins, you shouldn't care which chain I've paid you on. Your money is recorded in a permanent ledger, who cares which one?

That said, the ability to move across chains as you describe would be a nice way to guarantee that people don't get funny ideas about coins on Chain X being worth more than coins on Chain Y.

Each chain would have value based upon the 'network effect'... people already have funny ideas about Chain X and Chain Y ... aka Litecoin.  These ideas are not so funny, because if anyone could launch a new chain then you could print your own money.


Sure, but if you start with a simple technical scalability change, presumably by arbitrarily splitting outputs on an existing chain into two shards, it's not obvious that people would treat them as separate networks with different values, rather than different parts of a single network. For most purposes the end user wouldn't need to know anything about different shards - their client would just show them their total balance. The same addresses would work on both shards, and as a vendor when you asked a customer to pay address xyz you wouldn't know which shard their payment would be coming in on, so you'd actually have to work quite hard to set different prices for different shards and communicate the different prices to your customers. If coins on the two shards buy the same thing, the coins on the shards are worth the same.

In that situation it would be fairly mad to try to ascribe different values to different shards, but obviously it's helpful to have a bit of protection against somebody getting a mad idea...


Title: Re: Once again, what about the scalability issue?
Post by: jubalix on July 20, 2013, 02:39:16 PM
Doesn't Electrum solve this already?


Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on July 20, 2013, 03:06:38 PM
Doesn't Electrum solve this already?

Is it trustless?


Title: Re: Once again, what about the scalability issue?
Post by: tvbcof on July 20, 2013, 05:26:48 PM
Doesn't Electrum solve this already?

Is it trustless?

I've not studied Electrum extensively, but I believe it is 'trustless' in that someone running a server cannot steal your actual Bitcoin.  But the face value in BTC is only one aspect of 'value', and other information gleaned in running a server (Electrum or SPV) can be monetized at the expense of the clients.

Any entity can be put under enough pressure to either comply with government mandates (US government mandates in much of the world) or shut down and lose their infrastructure investment.  All of the PRISM participants chose the former...it's the only logical choice.  And the only legal choice for corporations.

When the actual foundation of Bitcoin requires resources which exceed what enthusiasts can muster in order to be operated realistically it will be highly susceptible to becoming yet another cog in the state surveillance machine.  It will be either allowed to operate in order to siphon intelligence from the users or effectively shut down as a usable currency solution.  I'll bet on that...in terms of how I manage my BTC stash that is.



Title: Re: Once again, what about the scalability issue?
Post by: DeathAndTaxes on July 20, 2013, 05:31:11 PM
Doesn't Electrum solve this already?

Is it trustless?

I've not studied Electrum extensively, but I believe it is 'trustless' in that someone running a server cannot steal your actual Bitcoin.  But the face value in BTC is only one aspect of 'value', and other information gleaned in running a server (Electrum or SPV) can be monetized at the expense of the clients.

Any entity can be put under enough pressure to either comply with government mandates (US government mandates in much of the world) or shut down and lose their infrastructure investment.  All of the PRISM participants chose the former...it's the only logical choice.  And the only legal choice for corporations.

When the actual foundation of Bitcoin requires resources which exceed what enthusiasts can muster in order to be operated realistically it will be highly susceptible to becoming yet another cog in the state surveillance machine.  It will be either allowed to operate in order to siphon intelligence from the users or effectively shut down as a usable currency solution.  I'll bet on that...in terms of how I manage my BTC stash that is.

Learn about bloom filters.


Title: Re: Once again, what about the scalability issue?
Post by: tvbcof on July 20, 2013, 05:37:51 PM
Doesn't Electrum solve this already?

Is it trustless?

I've not studied Electrum extensively, but I believe it is 'trustless' in that someone running a server cannot steal your actual Bitcoin.  But the face value in BTC is only one aspect of 'value', and other information gleaned in running a server (Electrum or SPV) can be monetized at the expense of the clients.

Any entity can be put under enough pressure to either comply with government mandates (US government mandates in much of the world) or shut down and lose their infrastructure investment.  All of the PRISM participants chose the former...it's the only logical choice.  And the only legal choice for corporations.

When the actual foundation of Bitcoin requires resources which exceed what enthusiasts can muster in order to be operated realistically it will be highly susceptible to becoming yet another cog in the state surveillance machine.  It will be either allowed to operate in order to siphon intelligence from the users or effectively shut down as a usable currency solution.  I'll bet on that...in terms of how I manage my BTC stash that is.

Learn about bloom filters.

Are you saying that you can use them to make confusing pulls in an attempt to complicate analysis?  Good luck with that.



Title: Re: Once again, what about the scalability issue?
Post by: DeathAndTaxes on July 20, 2013, 05:40:06 PM
Doesn't Electrum solve this already?

Is it trustless?

I've not studied Electrum extensively, but I believe it is 'trustless' in that someone running a server cannot steal your actual Bitcoin.  But the face value in BTC is only one aspect of 'value', and other information gleaned in running a server (Electrum or SPV) can be monetized at the expense of the clients.

Any entity can be put under enough pressure to either comply with government mandates (US government mandates in much of the world) or shut down and lose their infrastructure investment.  All of the PRISM participants chose the former...it's the only logical choice.  And the only legal choice for corporations.

When the actual foundation of Bitcoin requires resources which exceed what enthusiasts can muster in order to be operated realistically it will be highly susceptible to becoming yet another cog in the state surveillance machine.  It will be either allowed to operate in order to siphon intelligence from the users or effectively shut down as a usable currency solution.  I'll bet on that...in terms of how I manage my BTC stash that is.

Learn about bloom filters.

Are you saying that you can use them to make confusing pulls in an attempt to complicate analysis?  Good luck with that.

SPV nodes don't pull a single address or transaction.  It is more like "please send me all tx after block X which involve addresses starting with 1A".  The filter can be set as wide as one wants, all the way up to everything (send me all tx from all addresses), as a full node would.  It is a tradeoff between privacy and bandwidth.  There is no need to even limit requests to when you actually need tx data.  You can send a request for any subset of tx at any time to any node.    One could randomly connect to a different random full node at random times and request random bloom filters which consist of real tx the SPV is interested in and tx it will simply receive and discard.
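For readers unfamiliar with the mechanism being discussed, here is a sketch of the idea only - not BIP 37's actual wire format, and the class and its parameters are illustrative. The point is that a deliberately wide (high false-positive) filter matches many addresses the client doesn't own, trading bandwidth for privacy:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter sketch.  A wider filter (more false
    positives) leaks less about which addresses the SPV client
    actually owns, at the cost of downloading more transactions."""
    def __init__(self, size_bits=32, num_hashes=2):
        self.size = size_bits
        self.k = num_hashes
        self.bits = 0

    def _positions(self, item):
        # Derive k bit positions per item from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def matches(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))

wallet = BloomFilter(size_bits=32)   # deliberately small => high FP rate
wallet.add("1MyRealAddress")
# The server also sees matches for unrelated "decoy" addresses, which is
# the privacy/bandwidth tradeoff described above.
decoys = sum(wallet.matches(f"1Other{i}") for i in range(1000))
```

The real protocol (BIP 37) lets the client tune the false-positive rate when loading the filter, which is what "set as wide as one wants" refers to.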


Title: Re: Once again, what about the scalability issue?
Post by: tvbcof on July 20, 2013, 05:56:24 PM

Learn about bloom filters.

Are you saying that you can use them to make confusing pulls in an attempt to complicate analysis?  Good luck with that.

SPV nodes don't pull a single address or transaction.  It is more like "please send me all tx after block X which involve addresses starting with 1A".  The filter can be set as wide as one wants, all the way up to everything (send me all tx from all addresses), as a full node would.  It is a tradeoff between privacy and bandwidth.  There is no need to even limit requests to when you actually need tx data.  You can send a request for any subset of tx at any time to any node.

One could randomly connect to a different random full node at random times and request random bloom filters which consist of real tx the SPV is interested in and tx it will simply receive and discard.

Yup.  That's what I thought you were implying.

Note that when all server operators are induced to provide their meta-data to the NSA, your scheme of connecting to random servers will be, if anything, a marker to induce more strident analysis.  The data will be completely centralized.

The addresses of interest to you will almost certainly be able to be extracted via statistical analysis as long as you have a bias.  The best you can do is to apply the same bias to a pool of unrelated addresses.  Then you shoot yourself in the foot because you get tagged if any of those addresses happen to be under scrutiny.

Lastly, if somehow you can devise an effective framework to, as a user, subvert a near-total 'free' use of Bitcoin and popularize the technique, the plug is pulled on the solution in total, which, with 'supernodes' relegated to operation in datacenters, is fairly straightforward to do (ask Kim Dotcom how this works.)



Title: Re: Once again, what about the scalability issue?
Post by: DeathAndTaxes on July 20, 2013, 05:59:13 PM

Learn about bloom filters.

Are you saying that you can use them to make confusing pulls in an attempt to complicate analysis?  Good luck with that.

SPV nodes don't pull a single address or transaction.  It is more like "please send me all tx after block X which involve addresses starting with 1A".  The filter can be set as wide as one wants, all the way up to everything (send me all tx from all addresses), as a full node would.  It is a tradeoff between privacy and bandwidth.  There is no need to even limit requests to when you actually need tx data.  You can send a request for any subset of tx at any time to any node.

One could randomly connect to a different random full node at random times and request random bloom filters which consist of real tx the SPV is interested in and tx it will simply receive and discard.

Yup.  That's what I thought you were implying.

Note that when all server operators are induced to provide their data to the NSA, your scheme of connecting to random servers will be, if anything, a marker to induce more strident analysis.  The data will be completely centralized.

The addresses of interest to you will almost certainly be able to be extracted via statistical analysis as long as you have a bias.  The best you can do is to apply the same bias to a pool of unrelated addresses.  Then you shoot yourself in the foot because you get tagged if any of those addresses happen to be under scrutiny.

Lastly, if somehow you can devise an effective framework to, as a user, subvert a near-total 'free' use of Bitcoin and popularize the technique, the plug is pulled on the solution in total, which, with 'supernodes' relegated to operation in datacenters, is fairly straightforward to do (ask Kim Dotcom how this works.)



Yeah, nonsense.  Some node running in Russia for example is going to be forced to give their data to the NSA?  I showed above how tx volume up to 100 tps is possible on existing hardware (run of the mill VPS).  Moore's law is still occurring; 1,000 tps+ in a decade is certainly possible on an even modest server with decent connectivity.

Now is it likely we will reach a point where most users can't run a full node on a non-dedicated computer on a residential connection?  Probably, given enough growth (at least 2-3 orders of magnitude).  However it is a logical fallacy to jump from that to "there will only be a handful of nodes controlled by the NSA".  That is just a figment of your imagination.

We aren't talking about Facebook-scale dedicated datacenters.  A colocated server with 100 Mbps connectivity is more than sufficient.  Nothing crazy or exotic, just a decent amount of computing power, storage, and memory.  The idea that there can't be tens of thousands of nodes in hundreds of countries under a scenario like that is highly implausible.  If you're paranoid, then build an SPV client which only connects to different random nodes in US-unfriendly countries and operates over TOR.
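The back-of-envelope behind figures like "100 tps on a run-of-the-mill VPS" can be reproduced; note the ~250-byte average transaction size is an assumption for illustration, not a number from the post:

```python
TX_BYTES = 250          # assumed average transaction size in bytes
BLOCK_INTERVAL = 600    # seconds between blocks

def requirements(tps):
    """Bandwidth, block size, and annual storage implied by a tx rate."""
    bytes_per_sec = tps * TX_BYTES
    block_mb = bytes_per_sec * BLOCK_INTERVAL / 1e6
    year_gb = bytes_per_sec * 86400 * 365 / 1e9
    return bytes_per_sec, block_mb, year_gb

for tps in (7, 100, 1000):
    bps, block_mb, year_gb = requirements(tps)
    print(f"{tps:>5} tps: {bps/1e3:7.1f} kB/s, "
          f"{block_mb:8.1f} MB/block, {year_gb:8.1f} GB/year")
```

Under these assumptions 100 tps is only ~25 kB/s of raw transaction data (15 MB blocks), comfortably within a 100 Mbps colocated server, though the ~0.8 TB/year of storage is the longer-term cost.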


Title: Re: Once again, what about the scalability issue?
Post by: tvbcof on July 20, 2013, 06:58:25 PM

Yeah, nonsense.  Some node running in Russia for example is going to be forced to give their data to the NSA?  I showed above how tx volume up to 100 tps is possible on existing hardware (run of the mill VPS).  Moore's law is still occurring; 1,000 tps+ in a decade is certainly possible on an even modest server with decent connectivity.

Now is it likely we will reach a point where most users can't run a full node on a non-dedicated computer on a residential connection?  Probably, given enough growth (at least 2-3 orders of magnitude).  However it is a logical fallacy to jump from that to "there will only be a handful of nodes controlled by the NSA".  That is just a figment of your imagination.

We aren't talking about Facebook-scale dedicated datacenters.  A colocated server with 100 Mbps connectivity is more than sufficient.  Nothing crazy or exotic, just a decent amount of computing power, storage, and memory.  The idea that there can't be tens of thousands of nodes in hundreds of countries under a scenario like that is highly implausible.  If you're paranoid, then build an SPV client which only connects to different random nodes in US-unfriendly countries and operates over TOR.


You are going to put your trust in the likes of Putin and the Chinese Central Party, bearing in mind that an alternate monetary system threatens the control of their economies as much as it does the West's (or would if it goes anywhere)?  OK, you do that.  For my part, I suspect that monitoring the Bitcoin network will be one of the areas where there is genuine cooperation, as the benefits to the respective governments are huge.

Megaupload was just some racks of servers here and there with names like Carpathia on the cages.  Carpathia (and I suppose others) actually owned the gear, the likes of Equinix owned the floor-space, and the likes of Sprint owned the fiber and switches.  It took all of 5 minutes to halt the system (which was, IIRC, about 2% or 4% of the public Internet at one time.)  Pressure on any one of the three providers would be sufficient to put Megaupload out of business for a period of time, and the legal justification was pretty weak compared to the (very real) national security implications of running a competitive alternate monetary solution.

As for TOR, all the government would have to do to severely limit it would be to stop funding it for Christsake.  Given the framework of backbone taps I'm even more convinced that it is nothing much more than a honeypot even when I try to convince myself otherwise.



Title: Re: Once again, what about the scalability issue?
Post by: DeathAndTaxes on July 20, 2013, 07:00:42 PM

Yeah, nonsense.  Some node running in Russia for example is going to be forced to give their data to the NSA?  I showed above how tx volume up to 100 tps is possible on existing hardware (run of the mill VPS).  Moore's law is still occurring; 1,000 tps+ in a decade is certainly possible on an even modest server with decent connectivity.

Now is it likely we will reach a point where most users can't run a full node on a non-dedicated computer on a residential connection?  Probably, given enough growth (at least 2-3 orders of magnitude).  However it is a logical fallacy to jump from that to "there will only be a handful of nodes controlled by the NSA".  That is just a figment of your imagination.

We aren't talking about Facebook-scale dedicated datacenters.  A colocated server with 100 Mbps connectivity is more than sufficient.  Nothing crazy or exotic, just a decent amount of computing power, storage, and memory.  The idea that there can't be tens of thousands of nodes in hundreds of countries under a scenario like that is highly implausible.  If you're paranoid, then build an SPV client which only connects to different random nodes in US-unfriendly countries and operates over TOR.


You are going to put your trust in the likes of Putin and the Chinese Central Party, bearing in mind that an alternate monetary system threatens the control of their economies as much as it does the West's (or would if it goes anywhere)?  OK, you do that.  For my part, I suspect that monitoring the Bitcoin network will be one of the areas where there is genuine cooperation, as the benefits to the respective governments are huge.

Megaupload was just some racks of servers here and there with names like Carpathia on the cages.  Carpathia (and I suppose others) actually owned the gear, the likes of Equinix owned the floor-space, and the likes of Sprint owned the fiber and switches.  It took all of 5 minutes to halt the system (which was, IIRC, about 2% or 4% of the public Internet at one time.)  Pressure on any one of the three providers would be sufficient to put Megaupload out of business for a period of time, and the legal justification was pretty weak compared to the (very real) national security implications of running a competitive alternate monetary solution.

As for TOR, all the government would have to do to severely limit it would be to stop funding it for Christsake.  Given the framework of backbone taps I'm even more convinced that it is nothing much more than a honeypot even when I try to convince myself otherwise.



You're right, it is doomed.  Uninstall the client now before you are assassinated.


Title: Re: Once again, what about the scalability issue?
Post by: tvbcof on July 20, 2013, 07:23:15 PM
You're right, it is doomed.  Uninstall the client now before you are assassinated.

I think there is a pretty good chance that it is good for at least one more native pop where I can capitalize big time.  If not, so be it.

Without the benefit of real P2P, and with all the hassles of keeping keys secure, latency, etc, I cannot see Bitcoin being a competitive exchange currency in the long run.  Most people don't care about this shit and it is human nature to put faith in 'more powerful' entities than one senses in themselves.  So Bitcoin will be competing with more fully centralized entities which can easily eliminate the hassles associated with Bitcoin's early P2P efforts.  And with the same cryptographic strengths (and probably more) that Bitcoin has.

Even in the pool of early Bitcoin adopters and developers, who have a generally higher grasp of technology and generally less confidence in central governments, it is still rare to find people who care greatly about real P2P and pro-active threat mitigation.  At this point I've no real faith in the ecosystem to produce a solution I can have real confidence in.  There is a potential for Bitcoin to be a subsidized carrier for development of a robust solution which meets the threats of our day (and tomorrow), but I'm more interested in cutting up the logs in my field into lumber and beams than working on that at the moment and am headed out the door right now to work on that problem.  So, I'll talk at ya later.



Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on August 26, 2013, 02:39:40 PM
If you are a casual user unable to keep the client online why not just use a SPV client.

I thought there wasn't any. Which one would you recommend?

https://multibit.org/

It is linked to and recommended from bitcoin.org

Thx, I'll read up on it to make sure I'm not required to trust it in order to use it.

Multibit can't be used without trusting a 3rd party. The blockchain size crossed the 9 GB mark and keeps growing...
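For contrast with that 9 GB full chain, the header-only footprint from Satoshi's "Reclaiming Disk Space" section (quoted earlier in the thread) is easy to recompute:

```python
HEADER_BYTES = 80                 # size of a block header with no transactions
BLOCKS_PER_YEAR = 6 * 24 * 365    # one block per 10 minutes

headers_mb_per_year = HEADER_BYTES * BLOCKS_PER_YEAR / 1e6
print(f"{headers_mb_per_year:.1f} MB/year")  # ~4.2 MB, matching the whitepaper
```

This is why an SPV client that keeps only headers plus its own transactions stays tiny even as the full chain grows.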


Title: Re: Once again, what about the scalability issue?
Post by: TippingPoint on August 26, 2013, 03:36:09 PM

Any entity can be put under enough pressure to either comply with government mandates (US government mandates in much of the world) or shut down and lose their infrastructure investment.  All of the PRISM participants chose the former...it's the only logical choice.  And the only legal choice for corporations.


This is worth remembering ^ ^

https://upload.wikimedia.org/wikipedia/commons/c/c7/Prism_slide_5.jpg


Title: Re: Once again, what about the scalability issue?
Post by: tvbcof on August 26, 2013, 04:56:50 PM

Any entity can be put under enough pressure to either comply with government mandates (US government mandates in much of the world) or shut down and lose their infrastructure investment.  All of the PRISM participants chose the former...it's the only logical choice.  And the only legal choice for corporations.


This is worth remembering ^ ^

 - snip, supposed NSA slide about PRISM - 

I bolded the part which I feel is especially important.  Most for-profit corporations under US law are legally compelled to maximize shareholder profit.  This is wholly incompatible both with bucking the state's desire that the organization participate in surveillance and with incurring the expense of non-compliance.

In the corporations I've worked in, I've known a minority of people who are deeply disgusted by the police state apparatus being constructed, and a majority who are ambivalent and/or willfully ignorant about it.  I've never met anyone who was in favor of it (though I mostly worked in engineering.)  In the end, it does not matter.  Upper management who plan the direction of the corporation's trajectory comply with the directive of maximizing shareholder profit.  If they fail, they are pushed out of management positions.

To me this is the classic 'merger of state and corporate power' which Mussolini invoked when he preferred the term 'corporatism' to 'fascism' in seeking a characterization of his system of government in Italy.  It is also why even back in the 1700's there was considerable concern about the utility and dangers of 'corporations'.  It is worth noting, however, that the definition of 'corporation' has changed as society and business have evolved, but the basic structure of cooperation between parties in a corporation remains.  And more importantly, the concerns about the priorities of corporations, and how much harm/good they do for a society at large as a result of those priorities, remain an interesting question.



Title: Re: Once again, what about the scalability issue?
Post by: TippingPoint on August 26, 2013, 05:27:01 PM

Any entity can be put under enough pressure to either comply with government mandates (US government mandates in much of the world) or shut down and lose their infrastructure investment.  All of the PRISM participants chose the former...it's the only logical choice.  And the only legal choice for corporations.


This is worth remembering ^ ^

 - snip, supposed NSA slide about PRISM - 



Do you doubt its authenticity?


Title: Re: Once again, what about the scalability issue?
Post by: tvbcof on August 26, 2013, 06:13:48 PM

Any entity can be put under enough pressure to either comply with government mandates (US government mandates in much of the world) or shut down and lose their infrastructure investment.  All of the PRISM participants chose the former...it's the only logical choice.  And the only legal choice for corporations.


This is worth remembering ^ ^

 - snip, supposed NSA slide about PRISM -  



Do you doubt its authenticity?


I doubt almost everything to some degree.  I hold open the hypothesis that the entire Snowden episode is, in fact, a staged operation.

Early on in the release of the PRISM docs, some commentators with a supposed insider background said that it is an unusual document and not representative of the form one would expect at this level.  Further, there are a ton of minor jerk-off businesses milking the rapidly expanding national security sphere, and they come up with all kinds of puffed-up marketing material and what-not.  In fact, one scheme was to plant various stories, false or not, with Greenwald (mentioned by name, and without his knowing cooperation) in order to achieve certain goals.  This was discovered in the HBGary Federal e-mail hack.

That said, I feel it most likely that Snowden is the real McCoy and the PRISM doc is legit.  It's just that since I don't 'know' this, I try to be careful in my wording.

 - edit in:

Cass Sunstein is in the news recently to chair the 'independent' panel on the state surveillance system.  He is a public proponent of planting 'conspiracy theories' in an attempt to discredit organic ones or achieve other objectives.  I've not studied his work yet, but it sounds interesting so perhaps I'll do it as a winter project when the weather gets shitty.

Anyway, I would say that one is a world-class fool to take almost anything at face value, or to consider almost anything as 'fact'.  There always has been a lot of subterfuge in this world when it comes to centralized power means and mechanisms and it is probably more true today than ever...but we live in the 'information age' where it is much more possible to independently analyze these things.  I am lucky to be comfortable to not 'know' almost anything for sure and to be able to weigh observations which are against what I believe or wish to be true.  I find it entertaining.



Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on September 29, 2013, 02:50:59 PM
Blockchain size has crossed 10000 MB mark. I think it's time to close this thread until we see 20000 MB...


Title: Re: Once again, what about the scalability issue?
Post by: tvbcof on September 29, 2013, 09:44:00 PM
Blockchain size has crossed 10000 MB mark. I think it's time to close this thread until we see 20000 MB...

Actually, the thread had been pretty quiet until you piped up.

For my part, I'm still waiting for a good read on the economics of transaction fees.

Growth at 7 TPS or thereabouts is eminently manageable while retaining a realistic P2P structure (possibly part of the reason Satoshi chose it?)  Even notably higher transaction rates would be manageable, and probably defensible against most significant forms of attack, but the key is that things have to be predictable in order to facilitate good engineering and planning by those who have an interest in trying to help support the system.
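The arithmetic behind that ~7 TPS figure is worth spelling out. A back-of-the-envelope sketch (the 250-byte average transaction size is an illustrative assumption, not a figure from this thread):

```python
# Rough throughput implied by a 1 MB block limit (illustrative numbers).
BLOCK_SIZE_LIMIT = 1_000_000   # bytes (the 1 MB consensus limit)
AVG_TX_SIZE = 250              # bytes; assumed average transaction size
BLOCK_INTERVAL = 600           # seconds (10-minute target)

txs_per_block = BLOCK_SIZE_LIMIT // AVG_TX_SIZE           # transactions per block
tps = txs_per_block / BLOCK_INTERVAL                      # sustained transactions/second
data_per_year_gb = BLOCK_SIZE_LIMIT * 6 * 24 * 365 / 1e9  # chain growth at full blocks

print(f"{tps:.1f} TPS, ~{data_per_year_gb:.0f} GB/year of chain growth")
```

With these assumptions the limit works out to roughly 6.7 TPS and about 50 GB of chain growth per year at consistently full blocks.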



Title: Re: Once again, what about the scalability issue?
Post by: Mike Hearn on September 30, 2013, 10:46:44 AM
Can we stop spreading incorrect information please?

MultiBit does not rely on a trusted third party. That's the point of it - it reads the block chain.

Satoshi put a 1mb block size limit in place to avoid people creating giant "troll blocks" early on when mining was easy. It was a part of a series of quick anti-DoS hacks he put in, and he talked about removing the limit when the software scaled better. Indeed he talked about Bitcoin scaling to VISA-size transaction loads right from the start of the project. 1mb wasn't some super meaningful design choice he made in order to achieve some particular economic outcome.

In fact, here's a quote from an email he sent me on the matter back in 2010:

Quote
A higher limit can be phased in once we have actual use closer to the limit and make sure it's working OK.

Eventually when we have client-only implementations, the block chain size won't matter much.  Until then, while all users still have to download the entire block chain to start, it's nice if we can keep it down to a reasonable size.

With very high transaction volume, network nodes would consolidate and there would be more pooled mining and GPU farms, and users would run client-only.  With dev work on optimising and parallelising, it can keep scaling up.

Whatever the current capacity of the software is, it automatically grows at the rate of Moore's Law, about 60% per year.

We actually do have client-only implementations these days, which is why Gavin and I have been arguing to increase the block size. It isn't as important as it once was.


Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on September 30, 2013, 11:14:12 AM
Can we stop spreading incorrect information please?

MultiBit does not rely on a trusted third party. That's the point of it - it reads the block chain.

I ought to admit that I know almost nothing about MultiBit. Could you explain how it's possible to verify a transaction if you don't have a whole blockchain? You have to check the chain of ownership till you meet the block the coins were generated in. AND you have to check that none of the satoshis from these coins were double-spent.


Title: Re: Once again, what about the scalability issue?
Post by: bitcoin44me on September 30, 2013, 11:20:38 AM
I ought to admit that I know almost nothing about MultiBit. Could you explain how it's possible to verify a transaction if you don't have a whole blockchain? You have to check the chain of ownership till you meet the block the coins were generated in. AND you have to check that none of the satoshis from these coins were double-spent.


I don't know how it works, but you could probably write a PHP script that checks blockchain.info for how many BTC each address holds, how many confirmations a transaction has, and so on.
Like all the BTC websites do (dice sites, mcxNOW, ...)


Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on September 30, 2013, 11:21:55 AM
I ought to admit that I know almost nothing about MultiBit. Could you explain how it's possible to verify a transaction if you don't have a whole blockchain? You have to check the chain of ownership till you meet the block the coins were generated in. AND you have to check that none of the satoshis from these coins were double-spent.


I don't know how it works, but you could probably write a PHP script that checks blockchain.info for how many BTC each address holds, how many confirmations a transaction has, and so on.
Like all the BTC websites do (dice sites, mcxNOW, ...)

Who provides data for that and how did you make sure that this blockchain is legit (has the highest cumulative difficulty)?
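By "legit" the question means the chain embodying the most cumulative work. A node comparing candidate chains does, in effect, something like this sketch (the targets and helper names are illustrative, not Bitcoin Core's actual code):

```python
# Nodes pick the chain with the most cumulative work, not the most blocks.
MAX_TARGET = 0xFFFF * 2**208  # Bitcoin's difficulty-1 target

def block_work(target: int) -> int:
    """Approximate expected number of hashes needed to find a block at this target."""
    return 2**256 // (target + 1)

def best_chain(chains):
    """Return the candidate chain (a list of per-block targets) with the highest total work."""
    return max(chains, key=lambda targets: sum(block_work(t) for t in targets))
```

The point: a longer chain of easy blocks loses to a shorter chain of hard blocks, so a website merely showing you "a" blockchain is only trustworthy if it (or you) performed this comparison against the real network.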


Title: Re: Once again, what about the scalability issue?
Post by: edmundedgar on September 30, 2013, 12:04:41 PM
Can we stop spreading incorrect information please?

MultiBit does not rely on a trusted third party. That's the point of it - it reads the block chain.

I ought to admit that I know almost nothing about MultiBit. Could you explain how it's possible to verify a transaction if you don't have a whole blockchain? You have to check the chain of ownership till you meet the block the coins were generated in. AND you have to check that none of the satoshis from these coins were double-spent.

See parts 7 and 8 of Satoshi's whitepaper:
http://bitcoin.org/bitcoin.pdf
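For readers who don't want to open the PDF: section 8's SPV idea is that a light client keeps only block headers and checks that a transaction is in a block by hashing it up a short Merkle branch to the header's Merkle root. A minimal sketch of that check (not MultiBit's actual code):

```python
import hashlib

def dhash(b: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(leaves):
    """Merkle root of a list of leaf hashes (whitepaper section 7)."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:                 # Bitcoin duplicates the last hash on odd levels
            level.append(level[-1])
        level = [dhash(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify_branch(leaf, branch, root):
    """SPV check (section 8): hash the leaf up the branch and compare to the
    header's Merkle root.  branch is a list of (sibling_hash, current_is_left)."""
    h = leaf
    for sibling, current_is_left in branch:
        h = dhash(h + sibling) if current_is_left else dhash(sibling + h)
    return h == root
```

The branch is O(log n) hashes per transaction, which is why a client can verify inclusion without storing the whole chain; what it cannot verify this way is that the chain's history contains no invalid spends, which is the trust trade-off discussed later in this thread.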


Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on September 30, 2013, 12:07:05 PM
See parts 7 and 8 of Satoshi's whitepaper:
http://bitcoin.org/bitcoin.pdf

Thx. Does MultiBit use this approach?


Title: Re: Once again, what about the scalability issue?
Post by: edmundedgar on September 30, 2013, 12:08:56 PM
See parts 7 and 8 of Satoshi's whitepaper:
http://bitcoin.org/bitcoin.pdf

Thx. Does MultiBit use this approach?

Yes, IIUC.


Title: Re: Once again, what about the scalability issue?
Post by: Mike Hearn on September 30, 2013, 12:19:38 PM
Or a bit closer, read my stickied thread in this very forum:

https://bitcointalk.org/index.php?topic=252937.0


Title: Re: Once again, what about the scalability issue?
Post by: hayek on September 30, 2013, 01:51:32 PM
Could someone please explain to me why this matters again?

Right, right, I get that the number of transactions is increasing but does this, again, just boil down to "It takes new people for ever to download the block chain"?

I just don't see how a 10G blockchain isn't scalable when most people run 1TB+ disks.


Title: Re: Once again, what about the scalability issue?
Post by: tvbcof on September 30, 2013, 06:14:48 PM
Can we stop spreading incorrect information please?
I guess you are addressing me since I was the only poster in the recent activity of this thread.  Can you point out the 'incorrect information' I supposedly spread?

MultiBit does not rely on a trusted third party. That's the point of it - it reads the block chain.

Alas, the Java VM joins Windows and Android as systems I stay away from for security work.  It has at least been possible at some point to compile the JVM myself, but it's a hassle and never worked right on my system.  I trust almost no pre-compiled systems from large companies (especially Oracle), and that had been the case since before PRISM was disclosed.  It is even more so now.

Satoshi put a 1mb block size limit in place to avoid people creating giant "troll blocks" early on when mining was easy. It was a part of a series of quick anti-DoS hacks he put in, and he talked about removing the limit when the software scaled better. Indeed he talked about Bitcoin scaling to VISA-size transaction loads right from the start of the project. 1mb wasn't some super meaningful design choice he made in order to achieve some particular economic outcome.

It would not be the first time that someone fucked up and accidentally did the right thing.  My point is that no matter what the size, it is useful to bump into it and let the economics of transaction fees work for a while to find out empirically how the system functions.  This will allow certain systems to build up around the expectation of a more realistic mode of self sufficiency.

I think it is pretty fair to say that 'satoshi' also elaborated on an end-point of the system being supported by transaction fees.  As opposed to, say, users being milked for the PII intelligence value they provide (as has become a reality for the e-mail and www protocols.)  'satoshi' may very well have anticipated and welcomed this eventuality, and you would know better than I, but I will say that if so he fucked up by making it a hard fork to adjust the system to this trajectory.

Put another way, I don't think that someone 'supports' the kind of system I would like to see Bitcoin become by being exploited in the same manner that web service users are under many other protocols.

In fact, here's a quote from an email he sent me on the matter back in 2010:

Quote
A higher limit can be phased in once we have actual use closer to the limit and make sure it's working OK.

Eventually when we have client-only implementations, the block chain size won't matter much.  Until then, while all users still have to download the entire block chain to start, it's nice if we can keep it down to a reasonable size.

With very high transaction volume, network nodes would consolidate and there would be more pooled mining and GPU farms, and users would run client-only.  With dev work on optimising and parallelising, it can keep scaling up.

Whatever the current capacity of the software is, it automatically grows at the rate of Moore's Law, about 60% per year.

We actually do have client-only implementations these days, which is why Gavin and I have been arguing to increase the block size. It isn't as important as it once was.

I am not doubting the veracity of your private e-mails with 'satoshi', and don't doubt that you flipped out when you saw the 'quick hack' (sans commit comments) and provoked this note, but there is an amusing parallel here with how everything the NSA does is legal due to secret laws which nobody else can see.  There is also a similarly amusing parallel with the vanished Jesus being accessible only to persons with special communication powers.  Again, I don't doubt that you possess such powers...it's more 'satoshi's' divinity which I find debatable.

 - Edit in:  A hypothesis which also strikes me as possible is that 'satoshi' has been to some extent 'using' you (Mike).  I mean you have fearsome technical skills and drive, and also a positioning within the industry which has value of various sorts.  Your participation could be leveraged effectively by some management of the possible points of discontinuity on the evolution and endpoint of the Bitcoin system.



Title: Re: Once again, what about the scalability issue?
Post by: Peter Todd on September 30, 2013, 06:28:31 PM
Can we stop spreading incorrect information please?

MultiBit does not rely on a trusted third party. That's the point of it - it reads the block chain.

I ought to admit that I know almost nothing about MultiBit. Could you explain how it's possible to verify a transaction if you don't have a whole blockchain? You have to check the chain of ownership till you meet the block the coins were generated in. AND you have to check that none of the satoshis from these coins were double-spent.

Indeed.

Users have to understand that SPV does rely on a quorum of trusted third parties, specifically miners, which makes it significantly less secure. For instance, I could temporarily take over a large amount of hashing power by hacking a large pool, and use that to profitably rip off your business and many others with fake confirmations. If I manage to hack >50% of hashing power - which would require nothing more than hacking into about 3 pools right now - I don't even need to isolate your SPV node from the network (which is also easy, because SPV nodes have to trust their peers). I may even do it just to push the price of Bitcoin down and profit from shorting it, or I may have other more nefarious motives.

Similarly MultiBit and other SPV wallets provide no protection against fraudulent inflation yet, and the architecture they encourage puts the censorship of transactions in the hands of a very few.


Title: Re: Once again, what about the scalability issue?
Post by: Mike Hearn on October 01, 2013, 09:42:18 AM
Alas, the Java VM joins Windows and Android as systems I stay away from for security work.  It has at least been possible at some point to compile the JVM myself, but it's a hassle and never worked right on my system.  I trust almost no pre-compiled systems from large companies (especially Oracle), and that had been the case since before PRISM was disclosed.  It is even more so now.

I was able to compile OpenJDK8 on MacOS X a few weeks ago, at least. But there are other JVMs that are not as fast, which are also open source. Where do you get your compiler and operating system binaries from, I wonder?

I am not doubting the veracity of your private e-mails with 'satoshi', and don't doubt that you flipped out when you saw the 'quick hack' (sans commit comments) and provoked this note, but there is an amusing parallel here with how everything the NSA does is legal due to secret laws which nobody else can see.  There is also a similarly amusing parallel with the vanished Jesus being accessible only to persons with special communication powers.  Again, I don't doubt that you possess such powers...it's more 'satoshi's' divinity which I find debatable.

At the time we had these conversations I didn't know he would disappear. That thread is from the end of 2010. I think it's still worth quoting him on this, especially when people assign more meaning to design decisions than can really be justified. Unfortunately GMX doesn't (or didn't back then) use DKIM, so I have no signatures or any other proof, but it shouldn't matter anyway; it's obvious from things he posted earlier that there weren't intended to be any hard/small limits on volume.

I actually don't think I was around when Satoshi put the 1mb limit in place. I played with Bitcoin back in early 2009 but nobody used it so I lost interest and came back later. Back then he routinely made big or hard forking changes to the protocol in giant commits that mixed many changes together and had no useful descriptions. It was still his personal toy/prototype thing, so he got away with things we wouldn't be able to do today.


Title: Re: Once again, what about the scalability issue?
Post by: justusranvier on October 01, 2013, 10:43:00 AM
If the transaction rate doesn't ramp up soonish to levels far beyond what a 1 MB block can support the network will be in trouble. ASIC companies are pumping out more hashing power every day but the miners are all competing for the same ~25 BTC / block. Unless the exchange rate ramps up as fast as the difficulty, 25 BTC/block won't be able to economically support the mining infrastructure for very long. Then you'll see a huge overshoot effect where a lot of miners drop out and leave the network vulnerable, or even worse, sell off their now-unprofitable ASICs to someone who uses them to attack the network.

The block reward needs to grow, and the only way that happens is via transaction fees, and the only way to get significant amounts of transaction fees is to process lots of transactions.
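To put rough numbers on that argument: if fees had to replace the 25 BTC/block subsidy entirely at the ~7 TPS ceiling discussed in this thread, each transaction would need to carry fees on the order of 0.006 BTC. A back-of-the-envelope sketch (an illustration of the reasoning, not a prediction):

```python
# Fee per transaction needed to replace the block subsidy (illustrative).
SUBSIDY_BTC = 25.0        # current block reward in the post above
BLOCK_INTERVAL = 600      # seconds
tps = 7.0                 # the ~7 TPS ceiling discussed in the thread

txs_per_block = tps * BLOCK_INTERVAL       # transactions per block at the ceiling
fee_btc = SUBSIDY_BTC / txs_per_block      # fee each tx must pay to match the subsidy

print(f"{fee_btc:.4f} BTC/tx to fully replace the subsidy")
```

The implication is the same as the post's: with the transaction count capped, replacing the subsidy requires either much higher per-transaction fees or many more transactions.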


Title: Re: Once again, what about the scalability issue?
Post by: tvbcof on October 01, 2013, 03:15:16 PM
Alas, Java VM joins Windows and Android as systems I stay away from for security work.  It at least has been possible at some point to compile the JVM myself but it's a hassle and never worked right on my system.  I trust almost no pre-complied systems from large companies (especially Oracle) and that had been the case since before PRISM was disclosed.  But is even more so now.

I was able to compile OpenJDK8 on MacOS X a few weeks ago, at least. But there are other JVMs that are not as fast, which are also open source. Where do you get your compiler and operating system binaries from, I wonder?


Wonder no more;  I build both from source code and released distributions along with organized and auditable patches.  I've done things this way since I started using FreeBSD in the 90's.

When I run bitcoind these days I run a build which includes a compilation of openssl, boost, and berkeleydb, and I can select the source distribution and apply any patches I like along the way.  Further, I can re-build with any code tweaks to any of these items in a matter of minutes and have a new binary running seconds after that.

Actually, I don't run bitcoind currently.  I'm behind a satellite system and the block chain is twice the size of my total monthly download allotment (for which I pay $80/mo.)  Just the other day I was looking around for a VPS but unfortunately the block chain itself is starting to exceed the size that low tier VPS's support.

Speaking of, any news on the block chain compression front as mentioned in the whitepaper?  That was kind of a selling point to me (recognizing immediately the potential scaling issue.)  Last I heard (from you IIRC) ~sipa was trying to have a life and working on it sporadically.



Title: Re: Once again, what about the scalability issue?
Post by: peonminer on October 01, 2013, 03:26:43 PM
The world will move to off blockchain wallets and sites like inputs.io


Title: Re: Once again, what about the scalability issue?
Post by: Mike Hearn on October 01, 2013, 03:50:59 PM
The thing I was referring to was not block chain compression (that's not making a big difference) but rather pruning, i.e. deleting of old data from disk. There has been no progress on that front. Sipa worked on other things instead.

If your problem is you can't afford to even download 10G of data then you're better off using an SPV client instead. I'm pretty sure almost any VPS could run bitcoind - where did you find a VPS that has <20G of disk and bandwidth? If you ran one on a VPS you could use an SPV client locally that connects to it, and that'd be an equivalent security level.


Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on October 01, 2013, 04:19:30 PM
The world will move to off blockchain wallets and sites like inputs.io

Why wait? Do it now, use a bank!

+21000000


Title: Re: Once again, what about the scalability issue?
Post by: DeathAndTaxes on October 01, 2013, 04:27:12 PM
Users have to understand that SPV does rely on a quorum of trusted third parties, specifically miners, which makes it significantly less secure. For instance, I could temporarily take over a large amount of hashing power by hacking a large pool, and use that to profitably rip off your business and many others with fake confirmations. If I manage to hack >50% of hashing power - which would require nothing more than hacking into about 3 pools right now - I don't even need to isolate your SPV node from the network (which is also easy, because SPV nodes have to trust their peers). I may even do it just to push the price of Bitcoin down and profit from shorting it, or I may have other more nefarious motives.

Um, if you could do all that, you could rip off full nodes as well.  Only the portion related to peer selection is relevant: SPV nodes are more vulnerable to an isolation attack, so they should have very good peer selection algorithms.


Title: Re: Once again, what about the scalability issue?
Post by: DeathAndTaxes on October 01, 2013, 04:32:59 PM
If your problem is you can't afford to even download 10G of data then you're better off using an SPV client instead. I'm pretty sure almost any VPS could run bitcoind - where did you find a VPS that has <20G of disk and bandwidth? If you ran one on a VPS you could use an SPV client locally that connects to it, and that'd be an equivalent security level.

This is a good point, and one that I think will become more common in the future.  In residential scenarios there is something called "the last mile".  It is relatively easy to drop a multi-gigabit data connection into a neighborhood, but the installation and maintenance of the last mile into thousands of residences (from each of which you will only collect $30 to $100 monthly) is a bottleneck.  The good news is that datacenter bandwidth is an order of magnitude cheaper and continues to get cheaper at a faster rate.

Full node has (relatively) high bandwidth requirements
Users personal tx and confirmations have low bandwidth requirements.
Move the high bandwidth portion to where bandwidth is both cheap and available.

I imagine we will even see the development of ultra light clients which communicate to a specific trusted peer (probably one run by the user).  For example a user could have bitcoin wallets on mobile phone, laptop, desktop, and some hardware device which all communicate via encrypted and authenticated channel to a full node peer operated by the user. Best of both worlds.


Title: Re: Once again, what about the scalability issue?
Post by: tvbcof on October 01, 2013, 04:35:39 PM
The thing I was referring to was not block chain compression (that's not making a big difference) but rather pruning, i.e. deleting of old data from disk. There has been no progress on that front. Sipa worked on other things instead.

That is what I meant.  It is too bad that it's not being worked on.  Again, it was one of the things in the whitepaper which gave me some hope for the sustainability of the system as a more realistic community-maintained solution.  I've suspected for some time now that once the perception of possible scalability in this respect was implanted, the goal of that text was achieved.

If your problem is you can't afford to even download 10G of data then you're better off using an SPV client instead.

It's easy to get spoiled when working on high capacity networks and neglect to consider the various use-cases.  I've never argued that POTS or satellite should be supported as a baseline, but have argued that if they could it would result in a system which was much more difficult to subvert.  I personally don't think it is worth the tradeoff though.

As soon as a simple SPV client exists implemented in a language which does not effectively require un-trusted dependencies, I'll likely use it for certain things.  Of course it adds no value to the Bitcoin network other than 'headcount' perhaps, so I guess that the operators of the network will be extracting value from clients like this in other ways.

I'm pretty sure almost any VPS could run bitcoind - where did you find a VPS that has <20G of disk and bandwidth?

I typically try to plan my infrastructure investments (which are often more about time than money) for a reasonable life expectancy.  If the resource utilization rate is predictable then this is possible.  If, say, the transaction rate could change on a whim and necessitate a rapid escalation of the resources I need to deploy such a system, then there is less likelihood that I will bother in the first place.  I doubt that I am alone in such a calculus.

 - edit: to answer your question, I was looking for 1) a system which would allow me to compile my own OS, and 2) a jurisdiction which was suitably miffed at the NSA spying that it may have taken real steps to prevent it and has the technical expertise to do so.  I found myself here:  http://nqhost.com/freebsd-vps.html.  I also considered AWS micro instances, which may or may not work.

If you ran one on a VPS you could use an SPV client locally that connects to it, and that'd be an equivalent security level.

If I ran bitcoind on a VPS there would be little or no need to run anything locally.  At least on a VPS that I could have some confidence in from a security perspective (a big question mark in my mind at this time.)


Title: Re: Once again, what about the scalability issue?
Post by: Peter Todd on October 01, 2013, 05:08:19 PM
The thing I was referring to was not block chain compression (that's not making a big difference) but rather pruning, i.e. deleting of old data from disk. There has been no progress on that front. Sipa worked on other things instead.

That is what I meant.  It is too bad that it's not being worked on.  Again, it was one of the things in the whitepaper which gave me some hope for the sustainability of the system as a more realistic community-maintained solution.  I've suspected for some time now that once the perception of possible scalability in this respect was implanted, the goal of that text was achieved.

FWIW I've been asked to write a (funded) proposal to make improvements to UTXO scalability for Litecoin. I'm still working on the proposal - and won't be able to finish it until I finish a semi-related job for another client - but it looks like it will be an implementation of pruning, with nodes also storing some amount of archival data for bootstrapping. I've also got some more complex changes that would for the most part eliminate concerns about UTXO growth entirely, at the expense of a soft-fork (though I have no idea if the changes would be politically acceptable in Bitcoin). I'm still thinking through the latter, however, and how a pruning implementation would work in conjunction with it; I'll publish soonish. The ideal end-goal would be to eliminate the notion of an SPV client in exchange for a model where everyone validates/contributes between 0% and 100% of the blockchain resource effort and can pick that % smoothly.


Title: Re: Once again, what about the scalability issue?
Post by: tvbcof on October 01, 2013, 05:39:43 PM
FWIW I've been asked to write a (funded) proposal to make improvements to UTXO scalability for Litecoin. I'm still working on the proposal - and won't be able to finish it until I finish a semi-related job for another client - but it looks like it will be an implementation of pruning, with nodes also storing some amount of archival data for bootstrapping. I've also got some more complex changes that would for the most part eliminate concerns about UTXO growth entirely, at the expense of a soft-fork (though I have no idea if the changes would be politically acceptable in Bitcoin). I'm still thinking through the latter, however, and how a pruning implementation would work in conjunction with it; I'll publish soonish. The ideal end-goal would be to eliminate the notion of an SPV client in exchange for a model where everyone validates/contributes between 0% and 100% of the blockchain resource effort and can pick that % smoothly.

Oh really!  That is interesting.  I only looked at Litecoin early on and have lived under the assumption that they would have the same issues as Bitcoin (or worse) if they achieved the same utilization.  Although there were other things to like about Litecoin, generally I've been too lazy to pay much attention since my initial scan.  If they are getting serious about scalability, and about retaining lower-end users as a critical component of the network infrastructure, I had better take another look.

What is the best way to follow your work?



Title: Re: Once again, what about the scalability issue?
Post by: Peter Todd on October 01, 2013, 05:54:08 PM
Oreally!  That is interesting.  I only looked at Litecoin early on and have lived under the assumption that they would have the same issues as Bitcoin (or worse) if they achieved the same utilization.  Although there were other things to like about Litecoin, generally I've been to lazy to have paid much attention since my initial scan.  If they are getting serious about scalability and retaining lower end users as a critical component of the network infrastructure I better take another look.

Warren Togami is the major driver of Litecoin development right now, and he's got very strong feelings about spam and scalability. I don't necessarily always agree with his approach to the issues on a technical level, but his heart is in the right place.

What is the best way to follow your work?

Anything concrete will be posted to the bitcoin-development email list.


Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on October 16, 2014, 10:22:24 AM
Blockchain size has crossed 10000 MB mark. I think it's time to close this thread until we see 20000 MB...

Sorry for bad timing, I missed the moment when the blockchain was 20000 MB. It's larger than 22000 MB now, could anyone point me to a solution of the problem (if it's implemented)?


Title: Re: Once again, what about the scalability issue?
Post by: franky1 on October 16, 2014, 10:33:38 AM
could anyone point me to a solution of the problem (if it's implemented)?

http://www.dvice.com/sites/dvice/files/styles/blog_post_media/public/sandisk-128gbmicrosd.jpg


Title: Re: Once again, what about the scalability issue?
Post by: SatishMotaNaak1 on October 16, 2014, 10:35:38 AM
Need to advance technologically and managerially.

It's all bureaucracy at the moment.


Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on October 16, 2014, 11:29:59 AM

This doesn't solve the bandwidth problem: blockchain size doubled within a year while connection speeds increased by only 50%. Not sustainable, even without raising the 7 TPS limit.

http://connectedhome2go.files.wordpress.com/2008/03/nielsens-law-of-internet-bandwidth.jpg
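The mismatch is easy to put in numbers. A sketch, assuming an illustrative 10 Mbps home downlink and the growth rates from the post (100%/year chain size vs. Nielsen's ~50%/year bandwidth):

```python
# If chain size grows ~100%/year but end-user bandwidth only ~50%/year
# (Nielsen's Law), the time to download the full chain grows without bound.
chain_gb = 22.0        # chain size in late 2014, GB (from the post above)
bandwidth_mbps = 10.0  # assumed typical home downlink, Mbps (illustrative)

for year in range(11):
    # GB -> megabits, then seconds -> hours, for a full initial sync
    hours = chain_gb * 8000 / bandwidth_mbps / 3600
    if year % 5 == 0:
        print(f"year {year}: {chain_gb:.0f} GB, ~{hours:.0f} h to sync")
    chain_gb *= 2.0          # 100%/year chain growth
    bandwidth_mbps *= 1.5    # 50%/year bandwidth growth (Nielsen)
```

Under these assumptions the initial sync time roughly quadruples every four to five years, which is the unsustainability the post is pointing at.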


Title: Re: Once again, what about the scalability issue?
Post by: huoiuu on October 16, 2014, 12:20:43 PM
It seems to me Bitcoin core devs prefer ostrich policy. The blockchain keeps growing, pruning is not implemented yet (is it possible btw?), Gavin spoke about everything except the scalability issue on Bitcoin 2013 conference...
Is there any progress? Or is the game over?

Bitcoin's technical progress so far has been very slow. Perhaps they are pleased with what they have already achieved, but that is not a good thing; they need to write high-quality code!


Title: Re: Once again, what about the scalability issue?
Post by: iluvpie60 on October 16, 2014, 12:37:14 PM
Blockchain size has crossed 10000 MB mark. I think it's time to close this thread until we see 20000 MB...

Sorry for bad timing, I missed the moment when the blockchain was 20000 MB. It's larger than 22000 MB now, could anyone point me to a solution of the problem (if it's implemented)?

Is this a joke, or are you serious? Honestly, you should try to keep up with the news or search for an answer.


:S This sounds like it could be a massive show stopper.

maybe searching the forum and seeing that there is a plan of action means the show will continue
https://bitcointalk.org/index.php?topic=816298.0
http://www.coindesk.com/gavin-andresen-bitcoin-hard-fork/


The average person can get bandwidth of 30 Mbps download and 5 Mbps upload for less than 45 dollars per month in some areas of the U.S.

Nodes will be fine as storage becomes cheaper, paired with cheaper technology and cheaper and cheaper internet. Pretty soon phone carriers will be going to 5G as the next thing, then 6G and 7G and whatever. I used to pay 87.99 a month for my 30 Mbps internet and now it's a lot cheaper. I also used to pay 79.99 for 15 Mbps internet. There is no problem and you need to stop making things up. Point us to an actual problem if you are so convinced there is one. The potential problems you point out are not problems and never were; you just got a bunch of idiots commenting on it because they don't know any better.


/thread


Title: Re: Once again, what about the scalability issue?
Post by: HELP.org on October 16, 2014, 12:44:49 PM
Blockchain size has crossed 10000 MB mark. I think it's time to close this thread until we see 20000 MB...

Sorry for bad timing, I missed the moment when the blockchain was 20000 MB. It's larger than 22000 MB now, could anyone point me to a solution of the problem (if it's implemented)?

Is this a joke, or are you serious? Honestly, you should try to keep up with the news or search for an answer.


:S This sounds like it could be a massive show stopper.

maybe searching the forum and seeing that there is a plan of action means the show will continue
https://bitcointalk.org/index.php?topic=816298.0
http://www.coindesk.com/gavin-andresen-bitcoin-hard-fork/


The average person can get bandwidth of 30 Mbps download and 5 Mbps upload for less than 45 dollars per month in some areas of the U.S.

Nodes will be fine as storage becomes cheaper, paired with cheaper technology and cheaper and cheaper internet. Pretty soon phone carriers will be going to 5G as the next thing, then 6G and 7G and whatever. I used to pay 87.99 a month for my 30 Mbps internet and now it's a lot cheaper. I also used to pay 79.99 for 15 Mbps internet. There is no problem and you need to stop making things up. Point us to an actual problem if you are so convinced there is one. The potential problems you point out are not problems and never were; you just got a bunch of idiots commenting on it because they don't know any better.


/thread

In addition to that, these types of issues should be put into the context of an overall risk analysis.  Just pulling out one issue and saying "something should be done" is not the way to manage risks.  A first cut at such a report is found at https://bitcoinfoundation.org/static/2014/04/Bitcoin-Risk-Management-Study-Spring-2014.pdf

I think they left out some risks of the Bitcoin Foundation itself but it is a start.


Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on October 16, 2014, 02:14:26 PM
Good old BitcoinTalk... Ok, I'll come back when we cross 30 GB mark, maybe you'll have a solution by that time.


Title: Re: Once again, what about the scalability issue?
Post by: R2D221 on October 16, 2014, 04:15:53 PM
Good old BitcoinTalk... Ok, I'll come back when we cross 30 GB mark, maybe you'll have a solution by that time.

You say that as if we're obliged to comply with you. Why don't you come up with something instead of complaining?


Title: Re: Once again, what about the scalability issue?
Post by: Argwai96 on October 17, 2014, 04:13:35 AM
Blockchain size has crossed 10000 MB mark. I think it's time to close this thread until we see 20000 MB...

Sorry for bad timing, I missed the moment when the blockchain was 20000 MB. It's larger than 22000 MB now, could anyone point me to a solution of the problem (if it's implemented)?
There is not a problem. Bandwidth that is available for ~$40 per month is increasing at a faster rate than the blockchain is growing, and the same is true of both hard drive storage and RAM.


Title: Re: Once again, what about the scalability issue?
Post by: franky1 on October 17, 2014, 04:21:24 AM
So the issue is internet speeds?

Yet I do not hear people complaining that they had to download 15 GB for last year's Call of Duty, 20 GB for this year's, and even more for the next one, via Steam.


Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on October 17, 2014, 07:16:41 AM
There is not a problem.

So the issue is internet speeds?

Yet I do not hear people complaining that they had to download 15 GB for last year's Call of Duty, 20 GB for this year's, and even more for the next one, via Steam.

Sorry if it wasn't clear enough. The point of this thread is to get a solution to a problem that exists objectively. Not to argue with those who don't agree that the problem does exist.


Title: Re: Once again, what about the scalability issue?
Post by: Pulley3 on October 17, 2014, 09:32:20 AM
So the issue is internet speeds?

Yet I do not hear people complaining that they had to download 15 GB for last year's Call of Duty, 20 GB for this year's, and even more for the next one, via Steam.

Well, those games entertain them. It is not the same.


Title: Re: Once again, what about the scalability issue?
Post by: R2D221 on October 17, 2014, 12:31:28 PM
Sorry if it wasn't clear enough. The point of this thread is to get a solution to a problem that exists objectively. Not to argue with those who don't agree that the problem does exist.

Once again, I don't see you proposing any solution to this objective problem.


Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on October 17, 2014, 02:04:02 PM
Once again, I don't see you proposing any solution to this objective problem.

Does it change anything?


Title: Re: Once again, what about the scalability issue?
Post by: R2D221 on October 17, 2014, 04:43:07 PM
Once again, I don't see you proposing any solution to this objective problem.

Does it change anything?

Of course it does, because you show yourself as an arrogant person, and with that attitude people will be less likely to help.


Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on October 17, 2014, 05:46:57 PM
Of course it does, because you show yourself as an arrogant person, and with that attitude people will be less likely to help.

What post sounds arrogant?


Title: Re: Once again, what about the scalability issue?
Post by: Velkro on October 17, 2014, 05:51:36 PM
Can anyone explain Gavin Andresen's recent post on the Bitcoin Foundation page about increasing the transaction size? He previously blogged about increasing it for scalability, but now I'm confused; I don't understand, and the text is very long. Is it still planned, or does he not want to do it anymore?


Title: Re: Once again, what about the scalability issue?
Post by: tl121 on October 17, 2014, 06:54:48 PM
The present block size is at most an inconvenience, not a problem.  And future predictions of larger block sizes being problems are just predictions, not an objective problem today.  A simple analysis of the facts on the ground shows as much.

1. Size of block chain.
A check of Amazon.com shows that the cost of 1 GB of hard drive storage ranges from about $0.04 to $0.06.  Using the higher figure, the 27 GB block chain amounts to a storage cost of less than $2.00.

2. Initial downloading of block chain.
My Internet connection is a typical rural DSL service, with about 15 Mbps download speed. Allowing for protocol overhead, call it 1.5 MB/s. This means transmission time is about 5 hours if downloading from a fast node (or from multiple slow nodes via the bootstrap.dat torrent).

3. Initial verification of block chain.
My two-year-old Core i5 took about 18 hours to do this. Actually, I cloned the entire block chain across my LAN from another machine in that time.

This situation strikes me as an inconvenience, not an objective problem.

There presently is a problem, but it's related to the inefficient way that Bitcoin Core initially acquires the block chain. This is a matter of downloading speed, not blockchain size.  The problem arises because the client downloads from only one full node at a time and makes no attempt to find fast nodes, so the downloader may see a slow speed.  When I was running a full node, I had to limit my upload speed to 50 KB/s so I could continue to use my network for other purposes, because I have a slow DSL upload speed.  It takes about 6 days to upload the entire block chain at that rate.  Taking 6 days to download the block chain is definitely a problem, and restarting in the hope of finding a faster node is perhaps even worse, since it requires attention from the newbie. However, there is another way around this: use the torrent, which enables downloading at one's full download bandwidth.
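
As a sanity check, the figures in this post can be reproduced with a few lines; every input below is the poster's own stated assumption, not an authoritative measurement:

```python
# Back-of-the-envelope check of the figures above; every input is the
# poster's own assumption, not an authoritative measurement.
chain_gb = 27.0         # block chain size, GB
cost_per_gb = 0.06      # USD per GB of hard drive (high end of the range)
effective_mb_s = 1.5    # MB/s after protocol overhead on a 15 Mbps link

storage_cost = chain_gb * cost_per_gb                      # -> $1.62
download_hours = chain_gb * 1000 / effective_mb_s / 3600   # -> 5.0 h

print(f"storage cost: ~${storage_cost:.2f}")
print(f"initial download: ~{download_hours:.1f} h")
```

Both numbers match the post: under $2.00 of disk and roughly 5 hours of transfer from a fast node.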


Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on October 17, 2014, 07:47:17 PM
The present block size is at most an inconvenience, not a problem.  And future predictions of larger block sizes being problems are just predictions, not an objective problem today.  A simple analysis of the facts on the ground shows as much.

1. Size of block chain.
A check of Amazon.com shows that the cost of 1 GB of hard drive storage ranges from about $0.04 to $0.06.  Using the higher figure, the 27 GB block chain amounts to a storage cost of less than $2.00.

2. Initial downloading of block chain.
My Internet connection is a typical rural DSL service, with about 15 Mbps download speed. Allowing for protocol overhead, call it 1.5 MB/s. This means transmission time is about 5 hours if downloading from a fast node (or from multiple slow nodes via the bootstrap.dat torrent).

3. Initial verification of block chain.
My two-year-old Core i5 took about 18 hours to do this. Actually, I cloned the entire block chain across my LAN from another machine in that time.

This situation strikes me as an inconvenience, not an objective problem.

There presently is a problem, but it's related to the inefficient way that Bitcoin Core initially acquires the block chain. This is a matter of downloading speed, not blockchain size.  The problem arises because the client downloads from only one full node at a time and makes no attempt to find fast nodes, so the downloader may see a slow speed.  When I was running a full node, I had to limit my upload speed to 50 KB/s so I could continue to use my network for other purposes, because I have a slow DSL upload speed.  It takes about 6 days to upload the entire block chain at that rate.  Taking 6 days to download the block chain is definitely a problem, and restarting in the hope of finding a faster node is perhaps even worse, since it requires attention from the newbie. However, there is another way around this: use the torrent, which enables downloading at one's full download bandwidth.


Sorry if it wasn't clear enough. The point of this thread is to get a solution to a problem that exists objectively. Not to argue with those who don't agree that the problem does exist.


Title: Re: Once again, what about the scalability issue?
Post by: R2D221 on October 17, 2014, 09:15:14 PM
Of course it does, because you show yourself as an arrogant person, and with that attitude people will be less likely to help.

What post sounds arrogant?

Good old BitcoinTalk... Ok, I'll come back when we cross 30 GB mark, maybe you'll have a solution by that time.


Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on October 17, 2014, 10:27:49 PM
What post sounds arrogant?

Good old BitcoinTalk... Ok, I'll come back when we cross 30 GB mark, maybe you'll have a solution by that time.

Read that post as "Bitcoin zealots don't want to face reality; OK, maybe 30 GB will make some of them accept reality." It's not arrogant; there is just no point wasting time on fanatics.


Title: Re: Once again, what about the scalability issue?
Post by: Minecache on October 17, 2014, 11:16:58 PM
Blockchain size and confirmation times are the most critical issues Bitcoin needs to resolve today. If the problem is confirmation time, then I'd ask everyone hosting a Bitcoin server to increase their system memory to at least 8 GB, and to increase hard disk space to cope with the growing blockchain size.


Title: Re: Once again, what about the scalability issue?
Post by: masyveonk on October 17, 2014, 11:46:06 PM
There presently is a problem, but it's related to the inefficient way that bitcoin core initially acquires the block chain. This is a matter of downloading speed, not blockchain size.  The problem arises because the client downloads from only one full node at a time and makes no attempt to find fast nodes. As a result, the downloader may see a slow speed.  When I was running a full node, I had to limit my upload speed to 50 KBps so I could continue to use my network for other purposes.  That's because I have a slow DSL upload speed.  It takes about 6 days to upload the entire block chain at this speed.  Taking 6 days to download the block chain is definitely a problem, and restarting it in the hopes of finding a faster node for the download is perhaps an even worse problem, since it requires attention from the newbie. However, there is another way around this and that is to use the torrent.  This will enable downloading at one's full download Internet bandwidth.


I have to agree; sometimes progress stops because you're connected only to very slow nodes (or nodes not sending blocks). When this happens, to speed things up I had to disconnect from the internet and reconnect. It then finds new nodes.

Downloading and storing 30 GB of data is no problem today; just a little more intelligent downloading is necessary.


Title: Re: Once again, what about the scalability issue?
Post by: tl121 on October 18, 2014, 01:53:35 AM

Sorry if it wasn't clear enough. The point of this thread is to get a solution to a problem that exists objectively. Not to argue with those who don't agree that the problem does exist.

You have failed to articulate an "objective problem".  If you had, you would not be getting the objections you have been getting, or you would be making intelligent responses to them.  If one wants a problem solved, the first and usually most important step is to make a precise statement of the problem, including a detailed analysis of why it is a problem.  You have not done this.

Please stop using words like "objective", which are nothing but a thinly veiled put-down of people who have different views.  This type of argumentation is not a way to get something done unless one is the "boss" in an authoritarian pyramid, which you are not.



Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on October 18, 2014, 08:28:00 AM
You have failed to articulate an "objective problem".

English was the 3rd language I learned. Could you paraphrase this, plz?


Title: Re: Once again, what about the scalability issue?
Post by: lovely89 on October 18, 2014, 08:39:50 AM
You have failed to articulate an "objective problem".

English was the 3rd language I learned. Could you paraphrase this, plz?

He wants you to clearly identify the issue and provide an analysis of why it is an issue.

E.g.: the issue is that the blockchain is growing exponentially with Bitcoin's growth. This is an issue because the average (even above-average) user doesn't have internet bandwidth and HDD capacity that grow at the same rate, and therefore there will be (and for some, already is) a scalability issue.


Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on October 18, 2014, 08:56:29 AM
He wants you to clearly identify the issue and provide an analysis of why it is an issue.

The problem is well known (https://bitcoinfoundation.org/2014/10/a-scalability-roadmap/); no need to waste bytes repeating the same words.


Title: Re: Once again, what about the scalability issue?
Post by: R2D221 on October 18, 2014, 03:54:21 PM
He wants you to clearly identify the issue and provide an analysis of why it is an issue.

The problem is well-known (https://bitcoinfoundation.org/2014/10/a-scalability-roadmap/), no need to waste bytes and repeat the same words.

You link to a blog where they state they are already working on a solution, yet you still complain that nobody is thinking about one.

Also, it's not “wasting bytes”; it's giving references for your claims.


Title: Re: Once again, what about the scalability issue?
Post by: tl121 on October 18, 2014, 04:25:54 PM
You have failed to articulate an "objective problem".

English was the 3rd language I learned. Could you paraphrase this, plz?

He wants you to clearly identify the issue and provide an analysis of why it is an issue.

E.g.: the issue is that the blockchain is growing exponentially with Bitcoin's growth. This is an issue because the average (even above-average) user doesn't have internet bandwidth and HDD capacity that grow at the same rate, and therefore there will be (and for some, already is) a scalability issue.

I will add more.  The problem statement needs to include numbers, not just "too big".  And it needs to include specific assumptions, e.g. the average size of a transaction, the expected volume of Bitcoin transactions, forecast performance and cost figures for computer processors and memory, network bandwidth, etc.  There is nothing "objective" in saying that the block chain is too large and growing exponentially.  That may be true, but it is not a useful statement when it comes to evaluating proposed engineering solutions.

Now if anyone puts some assumptions down, expect debate as to the numbers.  And if you show columns of numbers, expect people to complain if they don't add up or if they are adding apples to oranges.  This is the way engineering progress is made. (BTW I was a very senior level engineering manager in a large computer company some while back, so I know how these arguments go.  I also know how they must go if there is to be progress.)
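
A minimal capacity model of the kind this post asks for might look as follows. Every input is an explicit assumption open to exactly the debate described above; a ~250-byte average transaction is assumed so that ~7 TPS roughly fills a 1 MB block every 10 minutes:

```python
# A minimal capacity model: chain growth from assumed transaction volume.
# Every input is an explicit assumption to be debated; ~250-byte transactions
# are assumed so that ~7 TPS roughly fills a 1 MB block per 10 minutes.
avg_tx_bytes = 250               # assumption: average transaction size
tps = 7                          # assumption: sustained transactions/second
seconds_per_year = 365 * 24 * 3600

annual_growth_gb = avg_tx_bytes * tps * seconds_per_year / 1e9
print(f"chain growth at full blocks: ~{annual_growth_gb:.0f} GB/year")
```

With these inputs the model gives roughly 55 GB/year, consistent with 1 MB blocks every 10 minutes; changing any assumption changes the answer, which is exactly why the assumptions must be stated.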





Title: Re: Once again, what about the scalability issue?
Post by: Minecache on October 18, 2014, 04:29:12 PM
This thread appears to be more focused on discussing the point of discussing threads.


Title: Re: Once again, what about the scalability issue?
Post by: FattyMcButterpants on October 18, 2014, 05:25:07 PM
He wants you to clearly identify the issue and provide an analysis of why it is an issue.

The problem is well-known (https://bitcoinfoundation.org/2014/10/a-scalability-roadmap/), no need to waste bytes and repeat the same words.
The same issue exists with every other scam coin out there. There is no scalability issue now, as TXs are not being rejected because there are too many for the nodes/miners to handle.

An increase to the maximum block size is being discussed, which would resolve this issue.


Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on October 18, 2014, 05:46:54 PM
You link to a blog where they state they are already working on a solution, yet you still complain nobody thinks on a solution.

Also, it's not “wasting bytes”, it's giving references to your claims.

They have been "working" on a solution since 2009.

The problem of the ever-growing blockchain is common knowledge (http://en.wikipedia.org/wiki/Common_knowledge). I don't see a point in adding anything to the description.


Title: Re: Once again, what about the scalability issue?
Post by: FattyMcButterpants on October 18, 2014, 05:48:53 PM
You link to a blog where they state they are already working on a solution, yet you still complain nobody thinks on a solution.

Also, it's not “wasting bytes”, it's giving references to your claims.

They have been "working" on a solution since 2009.

The problem of the ever-growing blockchain is common knowledge (http://en.wikipedia.org/wiki/Common_knowledge).
Wikipedia is not a credible source, as anyone can edit a Wikipedia entry. As of now scalability is not an issue and will not be for some time, so the devs have time to come up with a solution (likely a block size increase).


Title: Re: Once again, what about the scalability issue?
Post by: turvarya on October 18, 2014, 05:57:15 PM
Fatty has a point. For now it is just an academic problem, without even a forecast of when it will become a real one.
It doesn't seem like it will become a real problem next year, and they are already working on a solution.


Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on October 18, 2014, 07:46:39 PM
Wikipedia is not a credible source as anyone can make an edit to a wikipedia entry.

You are wrong. It's moderated by other users and you can't post anything you wish. You must provide references that will be checked by others.


Title: Re: Once again, what about the scalability issue?
Post by: R2D221 on October 18, 2014, 08:19:28 PM
The problem of the ever-growing blockchain is common knowledge (http://en.wikipedia.org/wiki/Common_knowledge). I don't see a point in adding anything to the description.

If you want a solution, the problem must be specific. You can't rely on common knowledge to propose a solution, because although it's common, it might be understood differently by each person. And then conflicts emerge.


Title: Re: Once again, what about the scalability issue?
Post by: turvarya on October 18, 2014, 08:24:15 PM

Wikipedia is not a credible source as anyone can make an edit to a wikipedia entry.

You are wrong. It's moderated by other users and you can't post anything you wish. You must provide references that will be checked by others.
I like Wikipedia, but it is not really a reliable source. You can cite any crap news site or blog as a reference.
A lot of people write something themselves and then put it on Wikipedia, with themselves as the reference.


Title: Re: Once again, what about the scalability issue?
Post by: mnmShadyBTC on October 18, 2014, 09:00:51 PM
Wikipedia is not a credible source as anyone can make an edit to a wikipedia entry.

You are wrong. It's moderated by other users and you can't post anything you wish. You must provide references that will be checked by others.
Not true. It is moderated, but only to the point that sources are present and the style of writing is correct. If something breaks the rules, only a warning will be put on the article along with a suggestion that it be edited. The moderators will not actually edit or censor content that is not accurate.

EDIT: it is a good resource for finding sources related to what you are researching, but you should never cite Wikipedia itself as a credible source, because it is not one.


Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on October 19, 2014, 08:14:03 AM
What does the credibility of Wikipedia have to do with the discussion? It doesn't change the fact that the problem of a bloated blockchain is well known.


Title: Re: Once again, what about the scalability issue?
Post by: Window2Wall on October 19, 2014, 08:30:45 PM
What does the credibility of Wikipedia have to do with the discussion? It doesn't change the fact that the problem of a bloated blockchain is well known.
It is not an issue today, as the blocks being mined are still well below the max block size of 1 MB. A solution is now much closer than it has been in the past, as a hard fork is being proposed that would have the block size increase over time.

Anyone "bloating" the blockchain with unnecessary spam is paying for the privilege (via TX fees), so people have a disincentive to do so, and when they do, there is no real damage to the blockchain.


Title: Re: Once again, what about the scalability issue?
Post by: jabo38 on October 20, 2014, 03:01:49 AM
I am a firm believer that Bitcoin is facing three challenges: 1. blockchain bloat, 2. a low transaction rate per second, and 3. a mining system growing into something it was never designed to be.

The OP has brought up a serious question. Blowing him off doesn't fix the problem. I would love to see Bitcoin succeed, and would really like to see a definitive road map here of how bloat is going to be solved.

What I have read so far in the responses is that:

A) it is no big deal, because as the blockchain grows, so will computers' ability to handle it

B) there is a way to trim it, but it hasn't been implemented

I don't know enough on the technical side to know if either of these is true, but I know they can't both be true.

One answer says there is no problem; the other says there is a problem, but that it is not that big of a deal.

If scenario "A" is true, what happens if computers' capabilities don't keep up, or if the growth of Bitcoin explodes next year? What is plan B then?

If scenario "B" is true, why hasn't it been implemented yet, or at least some kind of alpha that can be built upon? It has been many years.


Title: Re: Once again, what about the scalability issue?
Post by: turvarya on October 20, 2014, 06:54:23 AM
I am a firm believer that Bitcoin is facing three challenges: 1. blockchain bloat, 2. a low transaction rate per second, and 3. a mining system growing into something it was never designed to be.

The OP has brought up a serious question. Blowing him off doesn't fix the problem. I would love to see Bitcoin succeed, and would really like to see a definitive road map here of how bloat is going to be solved.

What I have read so far in the responses is that:

A) it is no big deal, because as the blockchain grows, so will computers' ability to handle it

B) there is a way to trim it, but it hasn't been implemented

I don't know enough on the technical side to know if either of these is true, but I know they can't both be true.

One answer says there is no problem; the other says there is a problem, but that it is not that big of a deal.

If scenario "A" is true, what happens if computers' capabilities don't keep up, or if the growth of Bitcoin explodes next year? What is plan B then?

If scenario "B" is true, why hasn't it been implemented yet, or at least some kind of alpha that can be built upon? It has been many years.
You got it wrong. There is a problem, but it is not a real problem yet (since computing power is far ahead). So there is plenty of time to fix it, and they (Gavin) are currently working on it.
Like I stated before, there is not even a forecast of when it will become a real problem, which just proves we have plenty of time.


Title: Re: Once again, what about the scalability issue?
Post by: Daedelus on October 20, 2014, 12:39:34 PM
Blockchain size has crossed 10000 MB mark. I think it's time to close this thread until we see 20000 MB...

Sorry for bad timing, I missed the moment when the blockchain was 20000 MB. It's larger than 22000 MB now, could anyone point me to a solution of the problem (if it's implemented)?
There is not a problem. Bandwidth that is available for ~$40 per month is increasing at a faster rate than the blockchain is growing, and the same is true of both hard drive storage and RAM.


Can't someone devise a system where costs aren't planned to escalate continuously into the future? One that is cheap today and always will be?...


I don't pay $40 for internet, and I'd prefer not to keep having to upgrade my computer every couple of years for no discernible benefit... I have had enough of that with Windoze...


Title: Re: Once again, what about the scalability issue?
Post by: herzmeister on October 20, 2014, 04:00:25 PM

Can't someone devise a system where costs aren't planned to escalate continuously into the future? One that is cheap today and always will be?...


https://en.wikipedia.org/wiki/MaidSafe


Title: Re: Once again, what about the scalability issue?
Post by: Daedelus on January 18, 2015, 04:32:36 PM

Can't someone devise a system where costs aren't planned to escalate continuously into the future? One that is cheap today and always will be?...


https://en.wikipedia.org/wiki/MaidSafe


I was jerking around, hands-on-cheeks, damsel-in-distress style  :D I already knew of a platform where costs would not increase significantly as the network grows.


Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on March 11, 2015, 06:36:44 PM
Bitcoin blockchain size is 30 GB. I've heard that Bitcoin 0.10 got improved blockchain downloading. How much faster is it now?


Title: Re: Once again, what about the scalability issue?
Post by: manselr on March 11, 2015, 06:53:38 PM
Bitcoin blockchain size is 30 GB. I've heard that Bitcoin 0.10 got improved blockchain downloading. How many times is it faster now?
I had to download the entire blockchain again because it got corrupted when I updated from 9.3 or whatever the previous version was. It was much faster: many more connected peers, and the connections happened quicker. Last time it took me about 2 days; this time I managed it in 6 hours or something along those lines.


Title: Re: Once again, what about the scalability issue?
Post by: Come-from-Beyond on March 11, 2015, 07:06:52 PM
I had to download the entire blockchain again because it got corrupted when I updated from 9.3 or whatever the previous version was. It was much faster: many more connected peers, and the connections happened quicker. Last time it took me about 2 days; this time I managed it in 6 hours or something along those lines.

So it's roughly a 10x speed-up; a good moment for 20 MiB blocks.
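
For scale, the worst-case growth implied by 20 MiB blocks is easy to compute; this assumes every block is full, which real traffic likely wouldn't be:

```python
# Worst-case chain growth at 20 MiB blocks, assuming every block is full
# (illustrative upper bound; real blocks would likely be smaller).
block_mib = 20
blocks_per_year = 6 * 24 * 365   # one block per ~10 minutes

growth_gib = block_mib * blocks_per_year / 1024
print(f"max chain growth: ~{growth_gib:.0f} GiB/year")  # roughly 1 TiB/year
```

So the upper bound is about 1 TiB of new chain data per year, twenty times the ~52 GiB/year ceiling of full 1 MiB blocks.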


Title: Re: Once again, what about the scalability issue?
Post by: ChuckBuck on March 11, 2015, 08:07:21 PM
Looks like Gavin and the core devs will address scalability in June:

http://insidebitcoins.com/news/gavin-andresen-optimistic-about-scaling-bitcoin/30652

Fork away fellas!


Title: Re: Once again, what about the scalability issue?
Post by: turvarya on March 12, 2015, 05:32:16 PM
Bitcoin blockchain size is 30 GB. I've heard that Bitcoin 0.10 got improved blockchain downloading. How many times is it faster now?
I had to download the entire blockchain again because it got corrupted when I updated from 9.3 or whatever the previous version was. It was much faster: many more connected peers, and the connections happened quicker. Last time it took me about 2 days; this time I managed it in 6 hours or something along those lines.
Are you sure about that?
I just had to reindex the whole thing, not download it, which took less than a day on my PC.


Title: Re: Once again, what about the scalability issue?
Post by: ChuckBuck on March 12, 2015, 06:59:01 PM
Bitcoin blockchain size is 30 GB. I've heard that Bitcoin 0.10 got improved blockchain downloading. How many times is it faster now?
I had to download the entire blockchain again because it got corrupted when I updated from 9.3 or whatever the previous version was. It was much faster: many more connected peers, and the connections happened quicker. Last time it took me about 2 days; this time I managed it in 6 hours or something along those lines.
Are you sure about that?
I just had to reindex the whole thing, not download it, which took less than a day on my PC.

I had a similar issue to manselr's; I had 0.9.1, I believe.  I did the install correctly, it launched, then there was some sort of error message, and pretty much the only option to continue was to download the entire blockchain from block 0.

Since I don't have an SSD in my laptop, it took me about 12-14 hours total to redownload.  Better than the typical 2-3 days, but still a long time overall.


Title: Re: Once again, what about the scalability issue?
Post by: Daedelus on April 15, 2015, 10:24:31 AM
Are there still plans for pruning? Is it technically possible in BTC?


Title: Re: Once again, what about the scalability issue?
Post by: BillyBobZorton on April 15, 2015, 01:22:19 PM
Apparently Satoshi predicted this, and his reply was: "By the time BTC is mainstream, technology will be advanced enough for 1 MB not to be a problem."