ATguy
|
|
January 28, 2016, 04:26:26 PM Last edit: January 28, 2016, 04:43:04 PM by ATguy |
|
Because not all blocks can be full, we are looking at 0.5 TB over the next 6 years, when users will probably have 10-100 TB HDDs in 6 years. If someone really thinks a bit technically about it and doesn't spread FUD about decentralization and how Bitcoin can't scale anyway with on-chain transactions, he would come to the conclusion: let's scale on-chain transactions as much as possible while still keeping decentralization. 0.5 TB of HDD storage in the next 6 years will not break decentralization; even 1 TB or 2 TB in the next 6 years will not break it either.

This is "FUD". You're working under the assumption that: 1. The block size will remain at 2 MB for 6 years (while saying that 1 MB hurts adoption). 2. The HDD capacity is going to increase tenfold in 6 years? 3. This doesn't hurt decentralization. You obviously are not thinking properly because there are a lot of factors to consider. What about new nodes? Good luck catching up with a 0.63 TB network on a Raspberry Pi. This is something that cannot be left out (among other things). In other words, this does hurt decentralization and that is a fact. The question is just how much, and is it negligible?

1. It can safely be adjusted up to 8 MB when needed, and that is still a maximum of only about 2 TB of data every 6 years. The point was that as long as users' natural HDD capacity increases over time, the block size limit can be increased too, keeping the same level of decentralization.
2. I have 2 TB of storage, but I guess the average might be well above 1 TB today. At least a 5x increase is reasonable in 6 years.
3. Raspberry Pis are a dead path for the most successful crypto, imo. You don't release the best games with minimum system requirements so low that you can please everybody. You stick to something like the top 75% of the user PC market, and only this way can you release a top product played by most. Releasing an indie game with basically no minimum system requirements will not give you a best seller...
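The storage figures being traded here (0.5 TB, 0.63 TB, ~2 TB) follow from simple arithmetic; a minimal sketch, assuming one block every ten minutes and, pessimistically, that every block is full:

```python
# Back-of-the-envelope worst-case blockchain growth (illustrative assumptions:
# every block is full and the size limit never changes).
BLOCKS_PER_DAY = 24 * 60 / 10        # ~144 blocks at one block per ~10 minutes
YEARS = 6

for block_size_mb in (1, 2, 8):
    growth_mb = block_size_mb * BLOCKS_PER_DAY * 365 * YEARS
    print(f"{block_size_mb} MB blocks -> ~{growth_mb / 1e6:.2f} TB over {YEARS} years")

# 1 MB -> ~0.32 TB, 2 MB -> ~0.63 TB, 8 MB -> ~2.52 TB (decimal TB)
```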
|
|
|
|
sAt0sHiFanClub
|
|
January 28, 2016, 04:37:55 PM |
|
The 21 Bitcoin PiTato is the first PiTato with native hardware and software support for the Bitcoin protocol. I don't know what it is, but I feel that a PiTato should really be a thing.
|
We must make money worse as a commodity if we wish to make it better as a medium of exchange
|
|
|
bargainbin
|
|
January 28, 2016, 04:54:37 PM |
|
The 21 Bitcoin PiTato is the first PiTato with native hardware and software support for the Bitcoin protocol. I don't know what it is, but I feel that a PiTato should really be a thing.

That's what somebody called the 21inc Bitcoin Computer, and now I can't think of it as anything else.
|
|
|
|
ATguy
|
|
January 28, 2016, 05:20:39 PM |
|
Miners don't currently mine empty blocks in hopes of keeping the blockchain trim & saving some HD space. Care to guess why they do it? Care to guess how likely that shit is to become pandemic as the blocksize is raised?
When miners receive a new block they start working on an empty block, so they are guaranteed to be mining on a valid block. When they finally finish validating the received block, they can add transactions and mine non-empty blocks. Validation might take about 30 seconds, which is consistent with roughly 5% of blocks being empty. I see where you're aiming: if bigger blocks are received, they take longer to validate, and thus a higher percentage of blocks are empty. But there is work in progress that would let miners announce the block with the transactions they are mining on, or add their own transactions which no one else can know about (like payments to their users); there is also naturally increasing CPU processing speed over time, or, as another possibility, dedicated ASIC hardware for faster validation if fees become really important and the other options fail. But these are only concerns for miners, who are expected to have better hardware than regular nodes; regular nodes are fine even with medium hardware specs when the block size limit is increased.
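The ~5% figure follows directly from the ratio of validation time to the average block interval; a quick sketch, where the validation times are assumed examples and the 30 s figure is the one from the post:

```python
# Rough estimate of the share of empty blocks caused by validation delay:
# while a miner is still validating the previous block (and so cannot safely
# include transactions), it mines on an empty template.
AVG_BLOCK_INTERVAL_S = 600           # ~10 minutes between blocks on average

def empty_block_share(validation_time_s: float) -> float:
    """Fraction of the average block interval spent mining an empty template."""
    return validation_time_s / AVG_BLOCK_INTERVAL_S

for t in (15, 30, 60):               # hypothetical validation times in seconds
    print(f"{t:>3} s validation -> ~{empty_block_share(t):.1%} empty blocks")
# 30 s -> ~5.0%, which is where the figure in the post comes from
```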
|
|
|
|
Lauda
Legendary
Offline
Activity: 2674
Merit: 3000
Terminated.
|
|
January 28, 2016, 06:37:18 PM Last edit: January 28, 2016, 07:28:15 PM by Lauda |
|
To be clear, I do not think we should restrict the throughput of the network so that people can continue to run full nodes on Raspberry Pis. I do not think this was ever the intention for Bitcoin either. It is true that there is a balancing act here, and that decentralization is affected in different ways. However, everything considered, I think that increasing the block size will be better for decentralization overall compared to leaving it at one megabyte.

You guys obviously have no clue as to what an 'example' is. I couldn't care less about the Raspberry Pi, but I know a lot of people using those to either mine or run nodes.

2. I have 2 TB of storage, but I guess the average might be well above 1 TB today. At least a 5x increase is reasonable in 6 years.

Unless new technology comes up, no it isn't.

3. Raspberry Pis are a dead path for the most successful crypto, imo. You don't release the best games with minimum system requirements so low that you can please everybody. You stick to something like the top 75% of the user PC market, and only this way can you release a top product played by most. Releasing an indie game with basically no minimum system requirements will not give you a best seller...

You guys seem to lack some logic though. Who cares about the Raspberry in particular; take any model of processor that you want. Where is the limit? This was discussed at a workshop, IIRC: new nodes being unable to ever catch up to the network. You can't deny it now, though: essentially you would push away X amount of people with systems that are unable to handle the network anymore -> smaller number of nodes -> decentralization harmed. As previously said, this is definitely going to happen; the question is whether it is going to be negligible.
|
"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks" 😼 Bitcoin Core ( onion)
|
|
|
johnyj
Legendary
Offline
Activity: 1988
Merit: 1012
Beyond Imagination
|
|
January 28, 2016, 07:28:00 PM |
|
I think any normal person will just go for the first option; only geeks and technically interested guys will try the second approach, and eventually many of them will give up on the second setup because it is just too complex to implement and maintain, and a RAID 0 will have a higher risk of failure. It is not worth the effort.

RAID 0 does not have a high risk of failure, and it is usually used for much better performance, not endurance/storage, and thus this analogy is wrong. Let's move on.

I use this example because it has a proven history: after so many years since RAID technology appeared, most people are still not running RAID, and when they do, they typically run RAID 1 or RAID 10 in data centers for more data safety. This is all because the raised level of complexity caused so many compatibility/driver problems which do not exist for a single-hard-drive setup, so the benefit of increased speed is not worth the effort of setting up and maintaining a RAID.

It also shows a strong philosophy when it comes to future scalability: simple solutions tend to survive in the long term. It is very easy to expand the capacity of a simple system by simply adding more of it, because you don't change the behavior of the existing system, and all the systems that depend on it can keep working as usual.
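As an aside on the disputed failure-risk point, the effect both posters are circling can be made concrete with a simple independence model; a sketch only, and the 3% annual failure rate is an assumed placeholder rather than a measured figure:

```python
# Sketch of why striping data across drives raises the chance of losing the
# array: RAID 0 has no redundancy, so the array is lost if ANY member drive fails.
# The per-drive annual failure rate below is a placeholder assumption.
def array_failure_probability(per_drive_p: float, drives: int) -> float:
    """P(at least one of `drives` independent drives fails) = 1 - (1 - p)^n."""
    return 1 - (1 - per_drive_p) ** drives

p = 0.03                              # assumed 3% annual failure rate per drive
for n in (1, 2, 4):
    print(f"{n} drive(s): ~{array_failure_probability(p, n):.1%} chance of data loss per year")
# 1 -> 3.0%, 2 -> 5.9%, 4 -> 11.5%
```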
|
|
|
|
Lauda
Legendary
Offline
Activity: 2674
Merit: 3000
Terminated.
|
|
January 28, 2016, 07:32:17 PM |
|
It also shows a strong philosophy when it comes to future scalability: simple solutions tend to survive in the long term. It is very easy to expand the capacity of a simple system by simply adding more of it, because you don't change the behavior of the existing system, and all the systems that depend on it can keep working as usual.

It doesn't change the behavior of the system? This can only be said of segwit until everyone is using it; after that, it is the system. However, I would argue that this 'simple' solution has a broader effect than many claim. Does it not push away some existing nodes and potential new nodes due to the increased requirements?
|
"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks" 😼 Bitcoin Core ( onion)
|
|
|
sgbett
Legendary
Offline
Activity: 2576
Merit: 1087
|
|
January 28, 2016, 08:36:28 PM |
|
Anecdotally, I just switched plans with my ISP. Signed up for their new "Terabyte" service. Not a bad service for some backwater village in the UK...
|
"A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution" - Satoshi Nakamoto*my posts are not investment advice*
|
|
|
ATguy
|
|
January 28, 2016, 09:02:01 PM Last edit: January 28, 2016, 09:20:08 PM by ATguy |
|
2. I have 2 TB of storage, but I guess the average might be well above 1 TB today. At least a 5x increase is reasonable in 6 years.

Unless new technology comes up, no it isn't.

If you prove we will not see over 10 TB HDDs in mass production in the following 6 years, then you can get some big blockers more sceptical about scaling above 2 or 4 MB. So go ahead.

3. Raspberry Pis are a dead path for the most successful crypto, imo. You don't release the best games with minimum system requirements so low that you can please everybody. You stick to something like the top 75% of the user PC market, and only this way can you release a top product played by most. Releasing an indie game with basically no minimum system requirements will not give you a best seller...

You guys seem to lack some logic though. Who cares about the Raspberry in particular; take any model of processor that you want. Where is the limit? This was discussed at a workshop, IIRC: new nodes being unable to ever catch up to the network. You can't deny it now, though: essentially you would push away X amount of people with systems that are unable to handle the network anymore -> smaller number of nodes -> decentralization harmed. As previously said, this is definitely going to happen; the question is whether it is going to be negligible.

Nodes with average system specs (a 4-core 2 GHz CPU) catch up pretty nicely; up to the last checkpoint it is never an issue, as network speed is the biggest bottleneck there. Only roughly the last half year takes most of the time, because only then are all transaction signatures checked. Fortunately, today a standard 4-core CPU rig can use all 4 cores to catch up in a few days at most. Obviously the number of cores an average rig has is going to increase in the future, and the Bitcoin devs could put new checkpoints in every minor version (compared to every major version today), so you will need to spend most of the time on only about the last 2 months of syncing, not about half a year as of now... a pretty easy fix that helps scalability for new nodes syncing from 0.
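The argument here is that total sync time is dominated by the signature-checked tail after the last checkpoint, and that this tail parallelizes across cores. A toy model of just that part; every number below is an assumed placeholder, and real initial sync is also bounded by network and disk, not only signature checking:

```python
# Very rough model of the post-checkpoint part of initial sync: signature
# verification dominates there and parallelizes across CPU cores.
# All figures below are illustrative placeholders, not measurements.
def tail_verify_hours(signatures: float, us_per_sig: float, cores: int) -> float:
    """Hours spent verifying signatures in the post-checkpoint tail."""
    return signatures * us_per_sig / 1e6 / cores / 3600

# e.g. an assumed 200 million signatures in the unverified tail,
# ~100 microseconds per ECDSA verification, on a 4-core CPU:
print(f"~{tail_verify_hours(200e6, 100, 4):.1f} h of pure signature checking")
# Moving the checkpoint forward shrinks `signatures`; more cores divide the time.
```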
|
|
|
|
Fatman3001
Legendary
Offline
Activity: 1554
Merit: 1014
Make Bitcoin glow with ENIAC
|
|
January 28, 2016, 11:48:09 PM |
|
To be clear, I do not think we should restrict the throughput of the network so that people can continue to run full nodes on Raspberry Pis. I do not think this was ever the intention for Bitcoin either. It is true that there is a balancing act here, and that decentralization is affected in different ways. However, everything considered, I think that increasing the block size will be better for decentralization overall compared to leaving it at one megabyte.

You guys obviously have no clue as to what an 'example' is. I couldn't care less about the Raspberry Pi, but I know a lot of people using those to either mine or run nodes.

Sure. I have literally hundreds of Raspberrys and Beagle Boards in my mine. But do you know what they are for and why they're completely irrelevant to this debate? Are you just writing stuff to shut us up?

...Who cares about the Raspberry in particular...

You do. And you seem to think everyone should run a full node. In the name of decentralization. I really think you misunderstand how decentralization works. You cannot expect every user to actively maintain the network. It's much more important that those who do maintain it understand what it takes and accept that burden. A person who can't set up their network properly, who can't really understand the difference between a full node and a light client, and who gets a crappy experience because the node is using too much bandwidth or is deteriorating the wifi, should never run a full node. You need quality hardware and at least semi-qualified people. And to really add to decentralization, they should keep an eye on the behavior of the node. Those weirdos who just sit there with a beer looking at transactions add more to decentralization than 10 "dumb" nodes.
|
"I predict the Internet will soon go spectacularly supernova and in 1996 catastrophically collapse." - Robert Metcalfe, 1995
|
|
|
iCEBREAKER
Legendary
Offline
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
|
|
January 29, 2016, 01:33:55 AM |
|
Assuming Bitcoin adoption is poor & we don't need to bump blocksize again. 2 MB gives us what, ~6 tps, and we're ~2-3 tps now?

Yeah, it essentially doubles throughput. This will make a huge practical difference in terms of adoption over the next few years.

Oh, I agree that it's better than nothing, but in a futile sort of way. I mean, if you don't count on Bitcoin's userbase >doubling in 6 years...

The Tooministas are confronting the same problem the Gavinistas did, which is that multiplying a tiny number such as 3 tps by another tiny (i.e. sane) number such as 2 or 4 or even 8 still only produces another tiny number such as 6 tps, 12 tps, or 24 tps. You can't get to Visa tps from here. Our only realistic path to Visa is orthogonal scaling, where each tx does the maximum economic work possible. Core and Blockstream are carefully and meticulously preparing Bitcoin for such scaling, via sidechains, Lightning, CLTV, RBF, etc. Meanwhile Tooministas fret and moan about "Are We There Yet?" and "But Daddy I Want It Now." As if a "not much testing needed" 2 MB jump is not worse than useless...
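For context, the tps figures being multiplied here come from a simple bound; a sketch assuming an average transaction size of ~500 bytes, which is an assumption for illustration rather than a measured constant:

```python
# Where the "~3 tps now, ~6 tps at 2 MB" style numbers come from:
# throughput is bounded by block size / average transaction size / block interval.
AVG_BLOCK_INTERVAL_S = 600
AVG_TX_SIZE_BYTES = 500               # assumed average transaction size

def max_tps(block_size_bytes: int) -> float:
    return block_size_bytes / AVG_TX_SIZE_BYTES / AVG_BLOCK_INTERVAL_S

for mb in (1, 2, 4, 8):
    print(f"{mb} MB blocks -> ~{max_tps(mb * 1_000_000):.1f} tps")
# 1 MB -> ~3.3, 2 MB -> ~6.7, 4 MB -> ~13.3, 8 MB -> ~26.7
```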
|
Monero
|
| "The difference between bad and well-developed digital cash will determine whether we have a dictatorship or a real democracy." David Chaum 1996 "Fungibility provides privacy as a side effect." Adam Back 2014
|
| | |
|
|
|
johnyj
Legendary
Offline
Activity: 1988
Merit: 1012
Beyond Imagination
|
|
January 29, 2016, 01:47:10 AM |
|
It also shows a strong philosophy when it comes to future scalability: simple solutions tend to survive in the long term. It is very easy to expand the capacity of a simple system by simply adding more of it, because you don't change the behavior of the existing system, and all the systems that depend on it can keep working as usual.

It doesn't change the behavior of the system? This can only be said of segwit until everyone is using it; after that, it is the system. However, I would argue that this 'simple' solution has a broader effect than many claim. Does it not push away some existing nodes and potential new nodes due to the increased requirements?

The thinking behind RAID 0 is to split data between two disks so that data can transfer in parallel and thus reach double the throughput; it is a change of behavior. But as a means to reach higher throughput, it was short-lived. We all know that the rise of SSDs quickly replaced most RAID 0 setups, since an SSD works exactly like a single hard drive but provides orders of magnitude higher throughput.

Similarly, hard drives will move to an SSD-based route and reach PB-level storage with ease; cost is not a problem once it is mass produced. As for the CPU limitation, special ASICs could be developed to accelerate the verification process, so that a 1 GB block could be processed in seconds.

The only thing left is network bandwidth, which is the current bottleneck. Segwit will not help in that regard, since it brings more data for the same number of transactions. But we are far from reaching the theoretical limits of fiber-optic communication yet; it is already at the Pb/s level in the lab. With the new trend of 4K content streaming, especially 4K live broadcast, enormous amounts of bandwidth are required. If the average household wants to watch 4K live broadcasts, they must get 1 Gbps-level optic fiber, and I think that will happen within 10 years at the latest.

All these things seem able to scale indefinitely because they simply add more to their existing capacity, but the way they interoperate never changes, so component manufacturers can focus on capacity increases instead of worrying about losing backward compatibility.
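On the bandwidth point, a quick sketch of raw transfer times helps frame what block sizes up to 1 GB would demand of a single link; this ignores latency, protocol overhead, and relay optimizations such as compact/thin blocks, so it is an upper-level illustration only:

```python
# Rough time to transmit one block over a single link, ignoring latency,
# protocol overhead, and block-relay optimizations.
def transmit_seconds(block_size_bytes: float, link_mbps: float) -> float:
    return block_size_bytes * 8 / (link_mbps * 1e6)

for size_mb, link in ((2, 10), (2, 100), (1000, 100), (1000, 1000)):
    t = transmit_seconds(size_mb * 1e6, link)
    print(f"{size_mb:>5} MB block over {link:>4} Mbps: ~{t:.1f} s")
# A 1 GB block needs ~80 s even on a 100 Mbps link, ~8 s on 1 Gbps.
```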
|
|
|
|
iCEBREAKER
Legendary
Offline
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
|
|
January 29, 2016, 02:19:59 AM |
|
users will probably have 10-100 TB HDDs in 6 years.
Why Pollyanna, what a nice rosy prediction you have there. If you want to make an altcoin based on hoping for the best, GLWT. But here at Bitcoin, we plan for the worst. We are building high-powered money capable of surviving world wars and global depressions, not a gimmicky fancy Visa to give you leverage over the latest fiat bubble.
|
Monero
|
| "The difference between bad and well-developed digital cash will determine whether we have a dictatorship or a real democracy." David Chaum 1996 "Fungibility provides privacy as a side effect." Adam Back 2014
|
| | |
|
|
|
2112
Legendary
Offline
Activity: 2128
Merit: 1073
|
|
January 29, 2016, 04:27:02 AM |
|
Anecdotally, I just switched plans with my ISP. Signed up for their new "Terabyte" service. Not a bad service for some backwater village in the UK...

If you don't mind translating from British English into International English: "No option without this and no calls allowed."

What does "no calls" mean in the UK? Does this mean that:

1) they won't negotiate on the requirement of an analog phone line (called POTS elsewhere); or
2) the analog phone line will not allow incoming and outgoing calls made with an analog phone, and will only be used to provision the VDSL2 signal to the box at your home?

In other words: is your analog telephone still connected to the copper wires coming into your home (possibly through a DSL splitter/filter), or did they force you to plug your analog phone into a dedicated socket in the VDSL2 modem box? Or, put yet another way: if the electricity fails in your home or the VDSL box is inoperable, would you still be able to make calls (e.g. 112 emergency) with your analog phone powered from the battery in the central phone switching office? Thanks in advance.
|
|
|
|
Lauda
Legendary
Offline
Activity: 2674
Merit: 3000
Terminated.
|
|
January 29, 2016, 10:27:37 AM |
|
If you prove we will not see over 10 TB HDDs in mass production in the following 6 years, then you can get some big blockers more sceptical about scaling above 2 or 4 MB. So go ahead.

You said 10-100 TB, which implied a tenfold increase in 6 years. This is not likely.

so you will need to spend most of the time on only about the last 2 months of syncing, not about half a year as of now... a pretty easy fix that helps scalability for new nodes syncing from 0.

Your solution is checkpoints?

But do you know what they are for and why they're completely irrelevant to this debate? Are you just writing stuff to shut us up?

They are relevant, and no, I'm not. I'd like proper feedback. If someone wants to deploy multiple nodes and doesn't have a big budget, aren't Pis and similar computers their best option? Regardless, I'm asking where the cut-off point is. What 'minimum system requirements' should full nodes have?

And you seem to think everyone should run a full node. In the name of decentralization.

Wrong conclusion. I think that anyone who wishes to run a full node should be able to do so relatively inexpensively (i.e. not costing the user a fortune) and easily (to set up). I do not think that more people should be pushed onto SPV wallets because of some limitation in their region (internet, for example). They should be allowed to choose. You are probably going to misunderstand this again: I do not mean that I'm against 2 MB blocks. I was actually advocating for dynamic blocks in another debate last year. I'm just trying to point out that it is not all black and white.

Similarly, hard drives will move to an SSD-based route and reach PB-level storage with ease; cost is not a problem once it is mass produced. As for the CPU limitation, special ASICs could be developed to accelerate the verification process, so that a 1 GB block could be processed in seconds.

No, they won't reach that "with ease". Are you trying to say that I need to buy specialized hardware to run a node? You're completely ignoring propagation delay and orphans; are 1 GB blocks some kind of joke?

If the average household wants to watch 4K live broadcasts, they must get 1 Gbps-level optic fiber, and I think that will happen within 10 years at the latest.

You're predicting that the average internet speed will be 1 Gbps in 2026? If so, then you're most likely going to be wrong. Average internet speeds grew by only 10% in the last year (depending on what your sources are). If you're talking solely about first-world countries, then there is a possibility.
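The skepticism here can be made quantitative with a compound-growth sketch; the starting average and the growth rates are assumptions for illustration, not authoritative measurements:

```python
# Sanity check on the 1 Gbps-by-2026 prediction: compound growth from an
# assumed current average speed. Both the starting speed and the growth
# rates are placeholder assumptions taken from the discussion.
def projected_mbps(start_mbps: float, annual_growth: float, years: int) -> float:
    return start_mbps * (1 + annual_growth) ** years

start = 6.0                            # assumed global average, in Mbps
for growth in (0.10, 0.25, 0.50):
    print(f"{growth:.0%}/yr for 10 yrs: ~{projected_mbps(start, growth, 10):.0f} Mbps")
# 10%/yr -> ~16 Mbps, 25%/yr -> ~56 Mbps, 50%/yr -> ~346 Mbps; none reach 1000 Mbps
```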
|
"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks" 😼 Bitcoin Core ( onion)
|
|
|
BlindMayorBitcorn
Legendary
Offline
Activity: 1260
Merit: 1116
|
|
January 29, 2016, 10:35:53 AM |
|
|
Forgive my petulance and oft-times, I fear, ill-founded criticisms, and forgive me that I have, by this time, made your eyes and head ache with my long letter. But I cannot forgo hastily the pleasure and pride of thus conversing with you.
|
|
|
BlindMayorBitcorn
Legendary
Offline
Activity: 1260
Merit: 1116
|
|
January 29, 2016, 10:55:43 AM |
|
A House of Lords inquiry has called for a broad investigation of the limited competition in the UK audit market, while also accusing bank auditors of being “disconcertingly complacent” about their role in the financial crisis.
The dominance of the four biggest auditors – Deloitte, Ernst & Young, KPMG and PwC – merited a detailed probe by the Office of Fair Trading, according to the House of Lords economic affairs committee.
Like the mythical ouroboros. Well played Blockstream!
|
Forgive my petulance and oft-times, I fear, ill-founded criticisms, and forgive me that I have, by this time, made your eyes and head ache with my long letter. But I cannot forgo hastily the pleasure and pride of thus conversing with you.
|
|
|
gmaxwell
Staff
Legendary
Offline
Activity: 4298
Merit: 8848
|
|
January 29, 2016, 10:59:29 AM |
|
The Tooministas are confronting the same problem the Gavinistas did, which is that multiplying a tiny number such as 3 tps by another tiny (i.e. sane) number such as 2 or 4 or even 8 still only produces another tiny number such as 6 tps, 12 tps, or 24 tps.
You can't get to Visa tps from here. Our only realistic path to Visa is orthogonal scaling, where each tx does the maximum economic work possible.
Right, card payments are currently at around 5000 TPS on a _year round_ average basis, with highest-day peaks probably at over 100k TPS. And that is now; these figures have been growing rapidly. These are numbers high enough that just the signature processing would completely crush even high-end, commercially available single systems.

Even if you took some really great drugs and believed plain-old-Bitcoin could get anywhere near matching that in the near future while having any decentralization remaining... so what? It wouldn't be close again after just a couple more years' growth. This sounds like an MBA-school failure example: ignoring your strengths and trying to match one-on-one with a competitor in an area where you're weakest and slowest to improve and they're strongest and fastest to improve.

Especially since considering payment networks as the primary competition for a _currency_ is a bit bizarre. Yet a payment network is all that some have wanted Bitcoin to be... I don't begrudge that, but the weirdness of the goal doesn't relieve it from having to have a logically consistent roadmap, or permit it to take down the whole currency in its failure.

A trip to the moon requires a rocket with multiple stages or otherwise the rocket equation will eat your lunch... packing everyone in clown-car style into a trebuchet and hoping for success is right out.
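To put the quoted card-network figures next to on-chain throughput, a sketch that simply reuses the numbers from this post and the tps bound discussed earlier in the thread:

```python
# Scale comparison: the 5,000 tps year-round average is the figure quoted in
# the post; the Bitcoin values reuse the approximate on-chain bounds at
# 1 MB, ~2 MB, and ~8 MB discussed earlier.
SECONDS_PER_YEAR = 86_400 * 365

card_avg_tps = 5_000
print(f"~{card_avg_tps * SECONDS_PER_YEAR / 1e9:.0f} billion card transactions per year")

for btc_tps in (3, 7, 24):            # rough 1 MB, 2 MB, 8 MB on-chain bounds
    print(f"{btc_tps} tps on-chain is ~1/{card_avg_tps // btc_tps} of the card average")
# ~158 billion tx/yr; even 24 tps is roughly 1/208 of the year-round card average
```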
|
|
|
|
ATguy
|
|
January 29, 2016, 11:00:11 AM |
|
The Tooministas are confronting the same problem the Gavinistas did, which is that multiplying a tiny number such as 3 tps by another tiny (i.e. sane) number such as 2 or 4 or even 8 still only produces another tiny number such as 6 tps, 12 tps, or 24 tps.
You can't get to Visa tps from here. Our only realistic path to Visa is orthogonal scaling, where each tx does the maximum economic work possible.
Obviously you can't scale to Visa with on-chain transactions and keep node decentralization. But that does not mean you can't increase to 6 tps or 12 tps right now and give users a much better experience with Bitcoin while still keeping node decentralization; that is well worthwhile, not insignificant and unnecessary as you say.

If you prove we will not see over 10 TB HDDs in mass production in the following 6 years, then you can get some big blockers more sceptical about scaling above 2 or 4 MB. So go ahead.

You said 10-100 TB, which implied a tenfold increase in 6 years. This is not likely.

Today it is not uncommon for people to have 2 TB, with a minority of nerds having 8 TB/16 TB. It is not unreasonable to expect that in 6 years you can replace those figures with 10 TB, and 50 TB/100 TB. This is what I meant. Sorry for the confusion if you thought I meant an average of 55 TB in 6 years.

so you will need to spend most of the time on only about the last 2 months of syncing, not about half a year as of now... a pretty easy fix that helps scalability for new nodes syncing from 0.

Your solution is checkpoints?

No, though it can help a bit if the checkpoints are updated more often. If you don't care to read my post and think about what I really meant, why reply then...
|
|
|
|
Carlton Banks
Legendary
Offline
Activity: 3430
Merit: 3080
|
|
January 29, 2016, 11:16:12 AM Last edit: January 29, 2016, 11:58:05 AM by Carlton Banks |
|
A trip to the moon requires a rocket with multiple stages or otherwise the rocket equation will eat your lunch... packing everyone in clown-car style into a trebuchet and hoping for success is right out.
Despite the hyperbole... how much of an exaggeration is this, exactly? I find it difficult to believe that Hearn's or Garzik's plays were ever expected to be serious successes by their devisers. I have a feeling the long-term strategy is to maintain as diverse a range of pressures on all stakeholders as possible, until all of a sudden a bunch of GE/IBM/Google types turn up in the metaphorical DeLorean/Ghostbusters van, some Egon Spengler/Christopher Lloyd type exclaiming: "We're the Bitcoin Doctors, and we're here to sort this whole mess out for you all! Don't thank us now..."
|
Vires in numeris
|
|
|
|