VeritasSapere
|
|
October 19, 2015, 05:31:14 PM |
|
I would consider the limit artificial if it is not necessary to keep Bitcoin decentralized and free. When I do the cost-benefit analysis, there seems to be considerably more benefit to increasing the blocksize than there is cost. This evaluation is of course highly subjective and depends on our own ideologies, which shape the conclusions we reach. I want Bitcoin to be decentralized and free, and I do not think that increasing the blocksize compromises these principles; with everything considered, I think they would actually be strengthened by an increase in the blocksize. This does need to be a balancing act, however: there are certainly negative externalities at both ends of this scale, and a middle ground will most likely end up being the best solution.
That's the whole point. We know an uncapped fee market under current conditions will strongly push towards centralisation. So it's a very reasonable measure to have, even though 1MB is already on the big side. Most people in most countries cannot reasonably afford to run a node, even in developed countries. I suppose this is where we disagree, since I think that the blocksize can be increased without increasing centralization more than leaving it at one megabyte would; there are also centralization pressures when the blocksize is too low. I can agree with you, however, that having a cap is a good thing, that it is currently sufficiently large, and that under perfect conditions it might even be too large from a purely technical perspective. However, since the cap is presently static, it is highly likely that it will not be ideal under future, unknown conditions. Can you agree that the blocksize will most likely need to be increased in the future if technology improves and adoption grows?
|
|
|
|
muyuu
Donator
Legendary
Offline
Activity: 980
Merit: 1000
|
|
October 19, 2015, 05:43:00 PM |
|
I suppose this is where we disagree, since I think that the blocksize can be increased without increasing centralization more than leaving it at one megabyte would; there are also centralization pressures when the blocksize is too low.
I can agree with you, however, that having a cap is a good thing, that it is currently sufficiently large, and that under perfect conditions it might even be too large from a purely technical perspective. However, since the cap is presently static, it is highly likely that it will not be ideal under future, unknown conditions. Can you agree that the blocksize will most likely need to be increased in the future if technology improves and adoption grows?
How many domestic nodes do you run? I run 1 domestic node, plus 2 nodes in VPS services abroad. Domestic nodes are essential for the system. And I'm struggling to keep up already, on my consumer-grade optic fibre connection. Most people wouldn't bother with the burden of running one constantly for no real benefit.
|
GPG ID: 7294199D - OTC ID: muyuu (470F97EB7294199D) forum tea fund BTC 1Epv7KHbNjYzqYVhTCgXWYhGSkv7BuKGEU DOGE DF1eTJ2vsxjHpmmbKu9jpqsrg5uyQLWksM CAP F1MzvmmHwP2UhFq82NQT7qDU9NQ8oQbtkQ
|
|
|
DooMAD
Legendary
Offline
Activity: 3948
Merit: 3191
Leave no FUD unchallenged
|
|
October 19, 2015, 05:50:51 PM |
|
Anti-BIP101 thread has 101 pages. Perhaps it's some sort of coded endorsement. On a more serious note, was the long gap between the two scaling conferences intentional, to let things simmer down a bit? Or was it purely for logistical reasons? Getting a kind of "in limbo" vibe at the moment.
|
|
|
|
VeritasSapere
|
|
October 19, 2015, 06:00:00 PM |
|
I suppose this is where we disagree, since I think that the blocksize can be increased without increasing centralization more than leaving it at one megabyte would; there are also centralization pressures when the blocksize is too low.
I can agree with you, however, that having a cap is a good thing, that it is currently sufficiently large, and that under perfect conditions it might even be too large from a purely technical perspective. However, since the cap is presently static, it is highly likely that it will not be ideal under future, unknown conditions. Can you agree that the blocksize will most likely need to be increased in the future if technology improves and adoption grows?
How many domestic nodes do you run? I run 1 domestic node, plus 2 nodes in VPS services abroad. Domestic nodes are essential for the system. And I'm struggling to keep up already, on my consumer-grade optic fibre connection. Most people wouldn't bother with the burden of running one constantly for no real benefit. I am running two full nodes from my home, and I have calculated that I could personally also support much larger blocks. However, I am in the global minority with such a good connection. As for most people not bothering, I think that is already the case; I am only running my full nodes out of altruism, after all. Most people do not run full nodes anyway, and I do not actually have a problem with that, since running a full node on every machine was never the intended configuration for large-scale deployment. Over the long term I would expect most full nodes to be hosted in data centers, and I do not really see a problem with that as long as the timing aligns well with adoption, which would help counteract the problem of node centralization.
|
|
|
|
hdbuck
Legendary
Offline
Activity: 1260
Merit: 1002
|
|
October 19, 2015, 08:35:58 PM |
|
Anti-BIP101 thread has 101 pages. Perhaps it's some sort of coded endorsement. ouhh nice one mate.. wonder how many pages the XTturders thread on the UnCeNs0rDED F00LRUM has tho. not like i'd even spend the couple of clicks to see for myself. ^^
|
|
|
|
muyuu
Donator
Legendary
Offline
Activity: 980
Merit: 1000
|
|
October 20, 2015, 08:02:22 AM |
|
I am running two full nodes from my home, and I have calculated that I could personally also support much larger blocks. However, I am in the global minority with such a good connection. As for most people not bothering, I think that is already the case; I am only running my full nodes out of altruism, after all. Most people do not run full nodes anyway, and I do not actually have a problem with that, since running a full node on every machine was never the intended configuration for large-scale deployment. Over the long term I would expect most full nodes to be hosted in data centers, and I do not really see a problem with that as long as the timing aligns well with adoption, which would help counteract the problem of node centralization.
Any realistic "calculation" has to involve how many people you expect to put up with the burden, spurts and outages that a node imposes on their network, while keeping it set up in a way that helps rather than hurts the P2P network. So this is not easy or tractable at all. What we know for sure is that node numbers are dropping steadily toward dangerous levels, and that is counting situations like yours and mine, which means the numbers are inflated (two nodes in the same home are less significant than one node per home, and VPS nodes are also less helpful to the network in terms of decentralisation).

With roughly 750KB real block sizes, current average bandwidth works out to 150-200GB of monthly usage, with peaks during block propagation that can stall your connection for seconds (spurts of blocksize * maxconnections, where maxconnections should be at least 5 to be solid, and realistically 8+ would be recommendable). That already makes gaming or videoconferencing pretty much a non-starter unless you stop the node (or worse, throttle it and worsen latency). And that is now, not the 8MB proposed for January under BIP101, or something as drastic as 8GB down the line. There is also the 50GB+ the blockchain currently takes up, since pruned Core doesn't support wallets yet and won't for a while. Most people are already out as it is. Check out https://www.reddit.com/r/Bitcoin/comments/3p5n9c/number_of_bitcoin_nodes_is_at_a_6_year_low_if_you/ and that is mostly first-worlders speaking.

The node conversation is also just one effect: miners suffer latency and orphan rates that might force them to soft fork or feather fork, creating a mess in the mining space, because regional clusters might start to appear and make large chain forks common.

When pruning support is finalised and the major propagation optimisations have landed, 2MB seems possible and probably more, but not just because we like the number: we would start deploying and observing that the transition is working safely. And when I say that 1MB is already large, I mean it, because it is already pushing people to run nodes in a way that isn't helpful but harmful, with maxconnections below 8 and even below 5. We also have reason to believe many nodes run on VPSes and belong to far fewer people than the numbers suggest. We might see NXT-style attacks soon.
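For a sense of scale, here is a rough back-of-the-envelope sketch (hypothetical figures, Python) of just the block-relay component of those numbers. It only counts pushing each block to each peer once, so it lands well below the 150-200GB monthly total above, which also covers transaction relay, serving blocks to syncing peers and protocol overhead:

```python
# Hypothetical sketch: upload cost of forwarding every block to every peer,
# assuming one block per 10 minutes and ignoring transaction relay, serving
# historical blocks to syncing nodes, and protocol overhead (so a lower bound).

BLOCKS_PER_MONTH = 6 * 24 * 30  # ~4320 blocks per month at 10-minute intervals

def monthly_relay_gb(block_mb, peers):
    """Monthly upload volume from pushing each block to every connected peer."""
    return block_mb * peers * BLOCKS_PER_MONTH / 1024.0  # MB -> GB

def burst_mb(block_mb, peers):
    """Worst-case spurt when a freshly found block goes out to all peers at once."""
    return block_mb * peers

for block_mb in (0.75, 1.0, 8.0):
    for peers in (5, 8):
        print(f"{block_mb:4.2f} MB blocks, {peers} peers: "
              f"~{monthly_relay_gb(block_mb, peers):7.1f} GB/month relay upload, "
              f"{burst_mb(block_mb, peers):5.1f} MB burst per block")
```

The burst figure is the blocksize * maxconnections spurt mentioned above; at 8MB blocks and 8 peers that is a 64MB push competing with whatever else the connection is doing.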
|
GPG ID: 7294199D - OTC ID: muyuu (470F97EB7294199D) forum tea fund BTC 1Epv7KHbNjYzqYVhTCgXWYhGSkv7BuKGEU DOGE DF1eTJ2vsxjHpmmbKu9jpqsrg5uyQLWksM CAP F1MzvmmHwP2UhFq82NQT7qDU9NQ8oQbtkQ
|
|
|
Lauda
Legendary
Offline
Activity: 2674
Merit: 2970
Terminated.
|
|
October 20, 2015, 08:17:02 AM |
|
He hasn't for 2 years unless I missed something recently.
Correct. Neither he nor Hearn has had any contributions in a while. I wonder why that is? Hearn did want to push buggy code to Core at times (check GitHub).
Some people are still going in circles, apparently.
|
"The Times 03/Jan/2009 Chancellor on brink of second bailout for banks" 😼 Bitcoin Core ( onion)
|
|
|
muyuu
Donator
Legendary
Offline
Activity: 980
Merit: 1000
|
|
October 20, 2015, 09:59:36 AM |
|
|
GPG ID: 7294199D - OTC ID: muyuu (470F97EB7294199D) forum tea fund BTC 1Epv7KHbNjYzqYVhTCgXWYhGSkv7BuKGEU DOGE DF1eTJ2vsxjHpmmbKu9jpqsrg5uyQLWksM CAP F1MzvmmHwP2UhFq82NQT7qDU9NQ8oQbtkQ
|
|
|
hdbuck
Legendary
Offline
Activity: 1260
Merit: 1002
|
|
October 20, 2015, 12:38:22 PM |
|
He hasn't for 2 years unless I missed something recently.
Correct. Neither he nor Hearn has had any contributions in a while. I wonder why that is? Hearn did want to push buggy code to Core at times (check GitHub).
Some people are still going in circles, apparently. Hearn never made a single contribution to Core.
|
|
|
|
muyuu
Donator
Legendary
Offline
Activity: 980
Merit: 1000
|
|
October 20, 2015, 01:07:58 PM |
|
He hasn't for 2 years unless I missed something recently.
Correct. Neither he nor Hearn has had any contributions in a while. I wonder why that is? Hearn did want to push buggy code to Core at times (check GitHub).
Some people are still going in circles, apparently. Hearn never made a single contribution to Core. The switch to LevelDB IIRC. Which by the way, continues to suck. And caused a pretty catastrophic accidental hard fork.
|
GPG ID: 7294199D - OTC ID: muyuu (470F97EB7294199D) forum tea fund BTC 1Epv7KHbNjYzqYVhTCgXWYhGSkv7BuKGEU DOGE DF1eTJ2vsxjHpmmbKu9jpqsrg5uyQLWksM CAP F1MzvmmHwP2UhFq82NQT7qDU9NQ8oQbtkQ
|
|
|
marcus_of_augustus
Legendary
Offline
Activity: 3920
Merit: 2349
Eadem mutata resurgo
|
|
October 20, 2015, 01:12:12 PM |
|
The switch to LevelDB IIRC. Which by the way, continues to suck. And caused a pretty catastrophic accidental hard fork. ... the first attempt to hardfork onto a big block chain.
|
|
|
|
brg444 (OP)
|
|
October 20, 2015, 02:01:04 PM |
|
When pruning support is finalised and the major propagation optimisations have landed, 2MB seems possible and probably more, but not just because we like the number: we would start deploying and observing that the transition is working safely.
Pruned nodes still need to initially download and verify the whole blockchain, which remains one of the major pain points in running a node. Moreover, we cannot rely solely on a network of pruned nodes.
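To make that pain point concrete, here is a rough sketch (Python, hypothetical connection speeds) of the time just to fetch the roughly 50GB chain mentioned earlier in the thread, ignoring signature verification and disk I/O, which usually dominate:

```python
# Hypothetical floor on initial block download (IBD) time for any new node,
# pruned or not: the full chain (~50GB, figure quoted earlier in the thread)
# must be downloaded and verified once, even if little of it is kept afterwards.
# Verification and disk I/O are ignored here, so real IBD takes longer.

CHAIN_GB = 50.0

def ibd_hours(download_mbps, chain_gb=CHAIN_GB):
    """Hours to fetch chain_gb gigabytes at a sustained rate of download_mbps megabits/s."""
    megabits = chain_gb * 8 * 1000
    return megabits / download_mbps / 3600

for mbps in (4, 20, 100):
    print(f"{mbps:3d} Mbit/s sustained: ~{ibd_hours(mbps):5.1f} hours just to download the chain")
```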
|
"I believe this will be the ultimate fate of Bitcoin, to be the "high-powered money" that serves as a reserve currency for banks that issue their own digital cash." Hal Finney, Dec. 2010
|
|
|
muyuu
Donator
Legendary
Offline
Activity: 980
Merit: 1000
|
|
October 20, 2015, 02:05:58 PM |
|
When pruning support is finalised and the major propagation optimisations have landed, 2MB seems possible and probably more, but not just because we like the number: we would start deploying and observing that the transition is working safely.
Pruned nodes still need to initially download and verify the whole blockchain, which remains one of the major pain points in running a node. Moreover, we cannot rely solely on a network of pruned nodes. Block propagation improvements were the important point in that post. Still, it is never a good thing to have more space taken up; it is just less punitive after the optimisations than before.
|
GPG ID: 7294199D - OTC ID: muyuu (470F97EB7294199D) forum tea fund BTC 1Epv7KHbNjYzqYVhTCgXWYhGSkv7BuKGEU DOGE DF1eTJ2vsxjHpmmbKu9jpqsrg5uyQLWksM CAP F1MzvmmHwP2UhFq82NQT7qDU9NQ8oQbtkQ
|
|
|
Zarathustra
Legendary
Offline
Activity: 1162
Merit: 1004
|
|
October 20, 2015, 06:00:58 PM |
|
Be Ready at Any Hour "But of that day and hour no one knows, not even the angels of heaven, nor the Son, but the Father alone." Matthew 24:36
You are such an easily misled person. That's how it's supposed to work, using fee markets artificially subsidised by the 1MB cap to determine optimal allocation of BS-chains and altcoins. It's great for a laugh! Yes, but it won't work. "You can fool some of the people all of the time, and all of the people some of the time, but you can't fool ALL of the people ALL of the time."
|
|
|
|
VeritasSapere
|
|
October 20, 2015, 06:16:28 PM Last edit: October 20, 2015, 06:37:07 PM by VeritasSapere |
|
I am running two full nodes from my home, and I have calculated that I could personally also support much larger blocks. However, I am in the global minority with such a good connection. As for most people not bothering, I think that is already the case; I am only running my full nodes out of altruism, after all. Most people do not run full nodes anyway, and I do not actually have a problem with that, since running a full node on every machine was never the intended configuration for large-scale deployment. Over the long term I would expect most full nodes to be hosted in data centers, and I do not really see a problem with that as long as the timing aligns well with adoption, which would help counteract the problem of node centralization.
Any realistic "calculation" has to involve how many people you expect to put up with the burden, spurts and outages that a node imposes on their network, while keeping it set up in a way that helps rather than hurts the P2P network. So this is not easy or tractable at all. What we know for sure is that node numbers are dropping steadily toward dangerous levels, and that is counting situations like yours and mine, which means the numbers are inflated (two nodes in the same home are less significant than one node per home, and VPS nodes are also less helpful to the network in terms of decentralisation). With roughly 750KB real block sizes, current average bandwidth works out to 150-200GB of monthly usage, with peaks during block propagation that can stall your connection for seconds (spurts of blocksize * maxconnections, where maxconnections should be at least 5 to be solid, and realistically 8+ would be recommendable). That already makes gaming or videoconferencing pretty much a non-starter unless you stop the node (or worse, throttle it and worsen latency). And that is now, not the 8MB proposed for January under BIP101, or something as drastic as 8GB down the line. There is also the 50GB+ the blockchain currently takes up, since pruned Core doesn't support wallets yet and won't for a while. Most people are already out as it is. Check out https://www.reddit.com/r/Bitcoin/comments/3p5n9c/number_of_bitcoin_nodes_is_at_a_6_year_low_if_you/ and that is mostly first-worlders speaking. The node conversation is also just one effect: miners suffer latency and orphan rates that might force them to soft fork or feather fork, creating a mess in the mining space, because regional clusters might start to appear and make large chain forks common. When pruning support is finalised and the major propagation optimisations have landed, 2MB seems possible and probably more, but not just because we like the number: we would start deploying and observing that the transition is working safely. And when I say that 1MB is already large, I mean it, because it is already pushing people to run nodes in a way that isn't helpful but harmful, with maxconnections below 8 and even below 5. We also have reason to believe many nodes run on VPSes and belong to far fewer people than the numbers suggest. We might see NXT-style attacks soon.

I do not have any issues running my nodes now in terms of bandwidth: surfing, Netflix, gaming, streaming, etc. all work fine alongside the full nodes I have set up, and they do not interfere with the rest of what I use the internet for at home. I can also predict that eight-megabyte blocks would not personally be a problem for me, though like I said, I am in the minority in terms of the quality of my connection here in the Netherlands; I can download over two hundred gigabytes overnight using BitTorrent while still leaving enough bandwidth for my full nodes to operate. There has indeed been a decline in full nodes over the last few years, which was to be expected with the introduction of simplified payment verification and the existence of custodial services like Coinbase. Fortunately, it seems to have stabilized somewhat recently, most likely in part due to the existence of competing implementations; in this sense at least, competition at the implementation level can give people more reasons to run full nodes.
It is also good to keep in mind that node count will increase with adoption: as more people discover Bitcoin, more of them will have reasons to run full nodes, whether they are businesses requiring independent full validation or individuals running nodes for altruistic or idealistic reasons. That is why I think hampering adoption in order to maintain decentralization would be counterproductive. I recognize node centralization as a real problem and a valid criticism of BIP101 specifically.

Mining centralization, however, I do not see as an issue that is affected by blocksize at all. I have written extensively on the subject, and if you would like to know in more detail why I think mining centralization is not affected by blocksize, read my article; in short, miners do not run full nodes, so they are not affected by the increased difficulty of running one. I am actually a miner myself, running a 10KW operation from my home, so I understand some of the intricacies and nuances of mining today, which is very different from just a few years ago. I do think there is a threat of mining centralization, but it comes primarily from the centralization of manufacturing and from economies of scale. Pool centralization is a completely different issue, which I am not overly concerned about at present; I think it would only become a problem once mining itself becomes too centralized, at which point it would really just be an extension of mining centralization. As a relatively small "home" miner, the type of miner that best serves decentralization, I am in a good position to see how I would be affected by an increase in the blocksize, and I can say with absolute certainty that I would not be affected by such a change at all, since I point my hashing power at a pool and do not run a full node for the purposes of mining.

Even though I do not think mining centralization is an issue related to increasing the blocksize, I do recognize that node centralization is. I can understand that you want optimizations to be completed before we increase the blocksize. My background in political philosophy has taught me that sometimes a perfect choice does not exist, and when faced with a dilemma we must still choose, even if it is the lesser of two evils. My point is that if these optimizations are not complete by the time adoption requires us to increase the blocksize to keep transactions from becoming unreliable and prohibitively expensive, then we should increase it anyway, since the alternative under that scenario would be worse. If decentralization and financial freedom are our goals, then we should at least have a blocksize that maximizes these principles: keeping the blocksize too low for too long can also create significant centralization pressures, which I would argue would be worse than a moderate increase at such a time. To be clear, I do not consider BIP101 a moderate increase; I do favor a moderate increase, and I am still waiting on either another alternative implementation or Core to implement one. I also think that changing the limit often, in small steps, via hard forks might not be particularly viable because of the political difficulty that would cause, which is why I prefer proposals with a fixed schedule or a dynamic limit. I did want to say here that I do respect your views.
You are consistent in your beliefs, and you are not resorting to sophistry or ad hominem towards me, unlike many of the other people in this thread. Your saying that the block size might even be too large now is evidence of the consistency of your logic, and I can respect that even though I do not agree with your conclusions. As I have stated before, I think the ideal solution will most likely be a middle ground on the blocksize, since there are centralization risks on either side of the spectrum: blocks growing extremely large without a cap could cause massive centralization risks, and in the same sense, keeping blocks extremely small at one megabyte for a long period of time would also cause massive centralization risks. When faced with such decisions it is good to be pragmatic, and important to weigh all of the different variables that affect and are affected by this decision, which are vast and multidisciplinary in nature.
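Coming back to the dynamic-limit idea mentioned above, here is a toy sketch (Python, not any specific proposal) of the general shape such a rule could take, deriving the next cap from recent demand with a hard floor:

```python
# Toy illustration of a dynamic blocksize limit -- not any specific BIP, just
# the general shape such rules take: the next cap is derived from recent demand
# (here, twice the median size of a trailing window of blocks), with a floor so
# the limit can shrink when blocks are small but never drops below 1MB.

from statistics import median

FLOOR_MB = 1.0
LOOKBACK = 2016  # roughly two weeks of blocks, mirroring the difficulty window

def next_cap_mb(recent_block_sizes_mb):
    """Cap for the next adjustment period, given recent block sizes in MB."""
    window = recent_block_sizes_mb[-LOOKBACK:]
    return max(FLOOR_MB, 2 * median(window))

# Blocks averaging ~0.75MB would give a 1.5MB cap for the next period:
print(next_cap_mb([0.75] * LOOKBACK))  # -> 1.5
```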
|
|
|
|
brg444 (OP)
|
|
October 20, 2015, 08:07:25 PM |
|
I do not have any issues running my nodes now in terms of bandwidth: surfing, Netflix, gaming, streaming, etc. all work fine alongside the full nodes I have set up, and they do not interfere with the rest of what I use the internet for at home. I can also predict that eight-megabyte blocks would not personally be a problem for me, though like I said, I am in the minority in terms of the quality of my connection here in the Netherlands; I can download over two hundred gigabytes overnight using BitTorrent while still leaving enough bandwidth for my full nodes to operate. We couldn't care less about your own node experience. The only important node is the one I run. There has indeed been a decline in full nodes over the last few years, which was to be expected with the introduction of simplified payment verification and the existence of custodial services like Coinbase. Fortunately, it seems to have stabilized somewhat recently, most likely in part due to the existence of competing implementations; in this sense at least, competition at the implementation level can give people more reasons to run full nodes.
It is also good to keep in mind that node count will increase with adoption: as more people discover Bitcoin, more of them will have reasons to run full nodes, whether they are businesses requiring independent full validation or individuals running nodes for altruistic or idealistic reasons. That is why I think hampering adoption in order to maintain decentralization would be counterproductive. So first you recognize the negative trend in the context of major historic growth in adoption, then you go on to propose the exact opposite: that node count will increase with adoption, when all signs point the other way. And hampering decentralization in order to further adoption is not counterproductive?
|
"I believe this will be the ultimate fate of Bitcoin, to be the "high-powered money" that serves as a reserve currency for banks that issue their own digital cash." Hal Finney, Dec. 2010
|
|
|
RoadTrain
Legendary
Offline
Activity: 1386
Merit: 1009
|
|
October 20, 2015, 09:53:11 PM |
|
The switch to LevelDB IIRC. Which by the way, continues to suck. And caused a pretty catastrophic accidental hard fork. ... the first attempt to hardfork onto a big block chain. I laughed hard, thank you
|
|
|
|
tvbcof
Legendary
Offline
Activity: 4774
Merit: 1283
|
|
October 21, 2015, 02:18:42 AM |
|
The switch to LevelDB IIRC. Which by the way, continues to suck. And caused a pretty catastrophic accidental hard fork. ... the first attempt to hardfork onto a big block chain. I laughed hard, thank you I don't think it's a joke. When it was recognized that there was to be a hard fork in association with the BDB mis-config, some decisions had to be made very quickly. I'm pretty sure certain people were lobbying to nix the 1MB block limit at that time because it was a convenient moment to do so; even by then it had been a source of heated debate for at least a year. I was not (and am not) an 'insider', so my visibility into things is limited and based somewhat on intuition and reading between the lines, but I'm fairly sure I remember things this way, and fairly sure it was the likely suspect who wanted to use the event as the excuse to bloat things.

The event and the decisions ultimately made had two main impacts on me:
1) It gave me too much confidence in Gavin's disposition and judgement, and it took longer than it might have for that to wear off. It would have been interesting to be a fly on the wall when some of those decisions were being made in the heat of battle. Perhaps I misestimated how much impact he had, and/or on which side of the equation.
2) I've felt very strongly about the bloat issue since I got hooked in 2011. The bloat attempt associated with that event was so upsetting to me that I bought a couple of domain names and spent some effort imagining how Bitcoin might scale without killing it, in case the 'dark side' won the battles and the rest of us had to attempt some sort of salvage operation and make the best of a bad situation.
|
sig spam anywhere and self-moderated threads on the pol&soc board are for losers.
|
|
|
VeritasSapere
|
|
October 21, 2015, 01:07:20 PM |
|
I do not have any issues running my nodes now in terms of bandwidth: surfing, Netflix, gaming, streaming, etc. all work fine alongside the full nodes I have set up, and they do not interfere with the rest of what I use the internet for at home. I can also predict that eight-megabyte blocks would not personally be a problem for me, though like I said, I am in the minority in terms of the quality of my connection here in the Netherlands; I can download over two hundred gigabytes overnight using BitTorrent while still leaving enough bandwidth for my full nodes to operate. We couldn't care less about your own node experience. The only important node is the one I run. This seems like an untenable position; I suppose if you had a 56k dial-up connection, the whole Bitcoin network would need to downscale so that you could run a full node? There has indeed been a decline in full nodes over the last few years, which was to be expected with the introduction of simplified payment verification and the existence of custodial services like Coinbase. Fortunately, it seems to have stabilized somewhat recently, most likely in part due to the existence of competing implementations; in this sense at least, competition at the implementation level can give people more reasons to run full nodes.
It is also good to keep in mind that node count will increase with adoption: as more people discover Bitcoin, more of them will have reasons to run full nodes, whether they are businesses requiring independent full validation or individuals running nodes for altruistic or idealistic reasons. That is why I think hampering adoption in order to maintain decentralization would be counterproductive. So first you recognize the negative trend in the context of major historic growth in adoption, then you go on to propose the exact opposite: that node count will increase with adoption, when all signs point the other way. And hampering decentralization in order to further adoption is not counterproductive? There can be multiple factors making up the trend, different forces pushing and pulling. It is entirely logical to think that as more people discover Bitcoin, more people will run full nodes; I even know people who discovered Bitcoin recently and are now running full nodes. Increasing the block size does apply pressure that reduces the node count, but increased adoption conversely increases it. Therefore it would be counterproductive to hamper adoption if our objective is to maximize decentralization and financial freedom.
|
|
|
|
muyuu
Donator
Legendary
Offline
Activity: 980
Merit: 1000
|
|
October 21, 2015, 02:00:35 PM |
|
This seems like an untenable position; I suppose if you had a 56k dial-up connection, the whole Bitcoin network would need to downscale so that you could run a full node?
We have already destroyed the chances of most domestic-grade connections running their own nodes (3G/4G mobile, dial-up, HAM radio, most satellite, etc). Currently only the top home broadband connections can realistically keep up with 400~700KB average blocks and 1MB peaks. Multiply that by 8, as BIP101 proposes as soon as January, and almost no home users in the whole world will remain, except maybe in a couple of jurisdictions. And 8MB is only the starting point: doubling up to 8GB is a fixed scaling scheme based on zero real information and pure speculation.

This whole discussion is pointless if we don't agree on what is acceptable in this regard, and that seems to be the case: people like Gavin or Hearn think domestic users should not be nodes and have said so on numerous occasions, while most Core devs think they are essential. There is no possible reconciliation between these two positions, as I've said many times. The trade-offs vary wildly between them and the endgame of each is a completely different Bitcoin from the other. This is why I don't consider this a debate; the positions are clear and irreconcilable. Personally I side with Luke, at the far end, believing we should either freeze or scale down the limit to let the technology catch up, so that more people, not fewer, can afford to run their own nodes and control their transactions fully.
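For reference, the schedule being argued over, sketched in Python from the figures in this thread (8MB in January 2016, doubling every two years to roughly 8GB; the actual BIP101 also interpolates linearly between doubling dates, which this ignores):

```python
# Sketch of the BIP101 growth curve as described in this thread: an 8MB cap
# from January 2016, doubling every two years for twenty years until ~8GB.
# The real proposal interpolates linearly between doublings; only the doubling
# points are shown here.

START_MB = 8
DOUBLING_YEARS = 2
TOTAL_YEARS = 20

for step in range(TOTAL_YEARS // DOUBLING_YEARS + 1):
    year = 2016 + step * DOUBLING_YEARS
    cap_mb = START_MB * 2 ** step
    print(f"{year}: {cap_mb:5d} MB ({cap_mb / 1024:.2f} GB)")
```

Whether home connections can track that curve is exactly the disagreement here.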
|
GPG ID: 7294199D - OTC ID: muyuu (470F97EB7294199D) forum tea fund BTC 1Epv7KHbNjYzqYVhTCgXWYhGSkv7BuKGEU DOGE DF1eTJ2vsxjHpmmbKu9jpqsrg5uyQLWksM CAP F1MzvmmHwP2UhFq82NQT7qDU9NQ8oQbtkQ
|
|
|
|