Judging by the lack of coins showing up on the Bitcoinocracy polls, the vigorously attacking groups may not be big investors in Bitcoin.

Your incessant use of this canard only weakens your arguments. Bitcoinocracy is no more credible than consider.it.

Bitcoinocracy is immune to Sybil attack, because you must prove ownership of BTC to participate. Consider.it is so open to Sybil attack it might as well be a honeypot. The former is signal, the latter is noise.

No. All of the above is noise. That's my point. The only signal is the emergent consensus of the network. The only vote that matters is the ultimate decision on who accepts what within a block. That majority will determine the longest chain, and the longest chain, in turn, defines Bitcoin. Anything other than that is an unreliable 'measure' of power within the system.
|
|
|
On one hand they are stopping a block size increase citing lack of consensus, and on the other hand they are force-feeding RBF & SegWit without consensus.
Removing the rules against actions that the network protocol expressly forbids, against the will of an economically significant portion of users, and risking a persistent ledger split in the process, is not a comparable thing. It's something that Bitcoin Core strongly believes it does not have the moral or technical authority to do, and attempting to do so would be a failure to uphold the principles of the system. It's not something to do lightly, and people who think that it's okay to change the system's rules out from under users who own coins in it are not people that I'd want to be taking advice from; that kind of thinking is counter to the entire Bitcoin value proposition.

So just to be clear: do you maintain that block size increases are necessarily "removing the rules against actions that the network protocol expressly forbids", and are therefore necessarily evil?

Finally, at some point the capacity increases from the above may not be enough. Delivery on relay improvements, SegWit fraud proofs, dynamic block size controls, and other advances in technology will reduce the risk, and therefore the controversy, around moderate block size increase proposals (such as 2/4/8 rescaled to respect SegWit's increase).
- Capacity increases for the Bitcoin system: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/011865.html

If a miner violates the hard rules of the system, they are simply not miners anymore as far as all the nodes are concerned.
For better or worse (I would say for worse), in this era of industrial mining, non-mining nodes have essentially zero power. Any viable mining operation has sufficient resources to run a node of its own, and connect explicitly to other mining entities that share its philosophy. The only power outside of miners is the threat that users abandon the chain en masse.
|
|
|
Judging by the lack of coins showing up on the bitcoinocracy polls, the vigorously attacking groups may not be big investors in Bitcoin. Your incessant use of this canard only weakens your arguments. bitcoinocracy is no more credible than consider.it.
|
|
|
No. As I have pointed out to you before, with SegWit, all nodes that want to operate in a trustless manner require processing the signature chain as well as the 'blockchain-minus'. There is NO efficiency gain for a node operating in a trustless manner. What you call 'efficiency gain of SegWit' is only achievable by nodes that must trust other nodes to perform validation for them.
As such, SegWit does nothing for the centralization issue.
You quoted a post of mine related to LN and talk about SegWit? If you want to have a proper discussion, join us on IRC; this toxic environment is fruitless and I don't even see half the posts in the thread anymore.

Sorry. Got my signals crossed. Though this thread, in its entirety, is entitled Estranged Core Developer Gavin Andresen Finally Makes Sensible 2MB BIP Proposal!. Seems relevant. That, and I did reply directly to a point you made about "block size". To return to your side branch, though, perhaps you can tell me in what manner you think LN contributes to decentralization. The way I see it, it will inevitably end up a hub-and-spoke system, meaning more centralization, with node and path discovery being mediated by (other?) centralized actors.
|
|
|
More decentralized? I have not stated this. I have stated that LN is decentralized and secure which is correct; you can't be the judge of this because you don't have a technical background as you've said, right? If anything Bitcoin will be very centralized if we focus on scaling via the block size. You can't deny this.
No. As I have pointed out to you before, with SegWit, all nodes that want to operate in a trustless manner require processing the signature chain as well as the 'blockchain-minus'. There is NO efficiency gain for a node operating in a trustless manner. What you call 'efficiency gain of SegWit' is only achievable by nodes that must trust other nodes to perform validation for them. As such, SegWit does nothing for the centralization issue.
|
|
|
Doing both seems ok to me but does contradict the wisdom of only making one change at a time. Can we change one and then a little later the other or is there some compelling reason to do them at the same time?
The SegWit Omnibus is already several significant independent changes being rolled out as a single release.
|
|
|
RIPPLE .
Hey - remember back when Ripple's market cap was greater than Bitcoin's? Yeah - those were the days.
|
|
|
.... come back to bitcoin from altcoins like ETH.
According to coinmarketcap, as of yesterday, Bitcoin's share of the entire crypto space was 88%. That seems around last year's mean, if memory serves. IOW, there has not been a significant recent flight from Bitcoin to alts.
|
|
|
I use various services to hold my coins. I recognize that I'm dependent on what the institutions do ...

No, please don't. If you'd like to keep your bitcoins safe against hacked exchanges, yourself, or hardware failure, just create a paper wallet at https://www.bitaddress.org/

Of course, if you do, you have no assurance that bitaddress.org does not have a copy of your private key. Sure, it is said that the code runs locally. Did you audit it? At that instance? If you're gonna be paranoid, you better noid harder.

Note: you can download the bitaddress page locally and run it on an airgapped computer. Not too complex an operation to be sure your funds are safe.
|
|
|
And imbues my focus with hysteria? Uhm, ok.

As compared to you attempting to ridicule me for pointing out that this particular analysis is inapplicable to the topic under discussion? Certainly. Meanwhile, anyone who actually gives a damn about understanding the issue will still have access to the link I posted.
Except, of course, that the linked material is irrelevant to 'the issue'. eta: I can only assume you didn't read the link to OrganOfCorti's blog I posted, since he demonstrates quite rigorously how that can, in fact, happen.
I can only assume you did not read it in its entirety, as the comments therein have OrganOfCorti admitting that his analysis is off in its conclusions, by roughly two orders of magnitude, and contain an unfulfilled statement that it will be updated to fix this error. I can only assume you did not read the follow-on info, which shows that it would take at least six years for any discernible chance that the 75% fork would falsely trigger at anything less than 70% actual support. But by all means, keep repeating the same inapplicable statement.
|
|
|
OrganOfCorti's post seems to focus upon mathematics, yours seems to focus upon hysteria.
Dude, I posted a link. Who's the hysterical one here again?

A link, coupled with a statement implying that the material in the link was applicable to the particular 75% fork threshold being discussed, and that it necessarily indicated a 'problem': "If you are really interested in why 75% is not enough, the definitive answer..."
|
|
|
Because the stated 75% criterion produces a negligible yet discernible chance that we get a false trigger at an actual 67% adoption rate due to variance? Yawn. Troll harder.

Uhm, no. He shows pretty conclusively that:

5. Summary
As it stands, BIP101 has implementation flaws that could cause BIP101 activation with a significantly sub-supermajority, or (in the presence of fake BIP101 voters) a minority. It is almost certain that if BIP101 is activated, it will be with a sub-supermajority, or even a minority.
It also allows true proportions of fake voters to be sufficiently low that it becomes quite possible for one large mining pool or a couple of smaller ones in collusion contributing fake BIP101 votes to cause premature BIP101 activation.
Emphasis is mine. If you want to present math that disproves OrganOfCorti's, feel free. Calling me a troll for pointing out OrganOfCorti's excellent blog post is just childish.

It is not so much the math, but interpreting what it means in real terms. OrganOfCorti's post seems to focus upon mathematics; yours seems to focus upon hysteria. Shall we analyze this together? The damning part of the analysis is specific to BIP 101, and assumes 'vote spoofing' on the part of nefarious actors. Such is fine, and important in regards to analyzing the BIP 101 situation. However, you present it as if it were universally applicable to any 75% proposal. 'Flaw #3' is inapplicable to any situation where the tabulation of 75% is strictly based upon hash power. So yes, it seems to me that you are trolling. Even so, there is an admitted latent issue with the order of magnitude in the comments, unaddressed for months:

Anonymous, 31 August 2015 at 03:46:
> The number of failure attempts before a success occurs in trials of this type is called a geometrically distributed random variable, and can be used to find the probability of some arbitrary true proportion resulting in more than 749 blocks of a sequential 1,000, after that true proportion has been present for some number of blocks.
This is incorrect, as overlapping sequences are extremely correlated. Treating overlapping sequences as independent trials will massively overestimate the chances of success. The expected time for a 0.7 proportion of hashrate to result in a 0.75 proportion of blocks is closer to 300,000. http://bitco.in/forum/threads/triggering-the-bip101-fork-early-with-less-than-75-miners.13/

Organ Ofcorti, 31 August 2015 at 16:18:
Yes, and I feel a bit silly about missing that! I realised it after a redditor commented: https://www.reddit.com/r/Bitcoin/comments/3ilwq1/bip101_implementation_flaws/cuhy71q
I'll be posting an update after I get the weekly stats out. I haven't had time to figure out an analytical approach, but I'll generate some nice plots based on simulations.

Or, more importantly, the amount of time for variance to result in a false trigger is important. The other analysis in the quoted comment above puts the chance as: "TL;DR: At anything less than 70% of steady hashrate, triggering a fork would take at least 6 years, and gets exponentially less likely as miner share decreases."

Even the bulk of Core devs seem to claim that it will be necessary in the not-too-distant future to increase the block size. Just not now. For some unstated reason. If variance results in a trigger after Core would have already increased the block size anyhow, then the trigger is a non-event. No chain fork results. So what?
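The single-window tail probability at stake here can be sanity-checked with a few lines of Python. This is a hedged sketch, not OrganOfCorti's analysis: it computes only the exact binomial chance that one fixed 1,000-block window contains 750 or more signalling blocks at a given true hashrate share. As the quoted comment notes, overlapping windows are heavily correlated, so this is not a per-block trigger rate:

```python
from math import comb

def window_trigger_prob(p: float, n: int = 1000, threshold: int = 750) -> float:
    """Exact binomial tail: P(at least `threshold` of `n` blocks signal)
    when the true signalling share of hashrate is p."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(threshold, n + 1))

# At a true 70% share, 750/1000 sits about 3.4 standard deviations above
# the mean of 700, so a single window almost never falsely triggers.
print(window_trigger_prob(0.70))

# At exactly 75%, the threshold sits at the mean, so a window triggers
# roughly half the time.
print(window_trigger_prob(0.75))
```

Even a naive independent-window estimate built from the 70% figure understates the wait, because consecutive windows share 999 blocks; the correct expected time, per the quoted comment, is on the order of 300,000 blocks.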
|
|
|
Yes, they can. And they must process more than 1MB of data in order to validate what they are calling a 1MB block. The difference in quantity of data, of course, being the amount of data in the signatures.
That was what I initially said. This depends on the exact quantity of data, though; just how much more are we talking about here?

So tell me again why this requirement for more data does not lead to node centralization, when many SegWit boosters (perhaps not yourself; I can't keep track) rely upon 'increasing block size to 2MB will lead to node centralization' as one of their strongest arguments? Or at least as their only argument for their claim that a simple block size increase is unsafe?
I don't rely on that argument, but I have surely used it a number of times (can't recall all the discussions). I'm just not aware of an estimated factor of increase, and have chosen to ignore this information until I have one. Has someone done the math?

In Wuille's presentation at Scaling Bitcoin Hong Kong, I believe* he stated that, looking at recent transactions, the current scaling factor would seem to be about 1.75x. Or that signature data is slightly less than half the block data.

*My memory is at times faulty. But I'll stick to the claim that some prominent Core dev stated that analysis of recent blocks led to this figure. Others I have seen use a 4x figure, totally dependent upon an assumption that multisig becomes a much larger portion of the transaction volume.

For this little branch of the discussion, the salient point is that any node that performs validation (i.e. any node operating in a trustless manner) must process not the claimed 1MB block size worth of data, but an amount of data that reflects the signature data as well: 1MB multiplied by this (instantaneous) scaling factor, be it 1.75x, 4x, or whatever else represents the proportion of signature data associated with the transactions included in that block.
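To make the arithmetic above explicit: if a fraction s of total transaction data is signatures, then excluding signatures from the counted block size means a fully validating node actually processes base size × 1/(1 − s). A small sketch; the 0.43 and 0.75 fractions here are my own back-calculated assumptions chosen to match the 1.75x and 4x figures discussed above, not sourced measurements:

```python
def scaling_factor(sig_fraction: float) -> float:
    """If sig_fraction of total transaction data is signature (witness)
    data, a trustless node processes 1/(1 - sig_fraction) times the
    'counted' base block size."""
    return 1.0 / (1.0 - sig_fraction)

def total_validated_mb(base_mb: float, sig_fraction: float) -> float:
    """Actual data a fully validating node must process for a block
    whose counted size is base_mb."""
    return base_mb * scaling_factor(sig_fraction)

# Signatures at ~43% of total data gives the ~1.75x figure:
print(total_validated_mb(1.0, 0.43))
# Signatures at 75% of total data (multisig-heavy assumption) gives 4x:
print(total_validated_mb(1.0, 0.75))
```

Note that "slightly less than half the block data" corresponds to s just under 0.5, i.e. a factor just under 2x; the 1.75x figure implies signatures are about 43% of the total.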
|
|
|
Because the stated 75% criterion produces a negligible yet discernible chance that we get a false trigger at an actual 67% adoption rate due to variance? Yawn. Troll harder.
|
|
|
(bold not in original)
Please define what you mean by 'upgraded client'.
A client that supports SegWit after the activation occurs. Those clients can download and validate the data.

Yes, they can. And they must process more than 1MB of data in order to validate what they are calling a 1MB block. The difference in quantity of data, of course, being the amount of data in the signatures. So tell me again why this requirement for more data does not lead to node centralization, when many SegWit boosters (perhaps not yourself; I can't keep track) rely upon 'increasing block size to 2MB will lead to node centralization' as one of their strongest arguments? Or at least as their only argument for their claim that a simple block size increase is unsafe?
|
|
|
I can't tell what is subject and what is object in your reply. But if I have your words parsed properly, then I believe you are making a false statement. Let me try again.
My post is valid. With SegWit, transacting between upgraded clients becomes more efficient; there is no increase in capacity for nodes that have not upgraded (i.e. non-SegWit nodes). They are able to receive the data but are unable to validate it. If a client is able to validate it, then it is a SegWit node. Not sure if this was changed in any way since the last time I read about it (there's also a proposal in regards to it by Peter Todd which I've yet to fully read). (bold not in original)

Please define what you mean by 'upgraded client'. If such a client is getting a 'capacity boost', the only way this can be accomplished is by that node ignoring signature data. Ignoring signature data in and of itself makes that node dependent upon others to perform validation. Accordingly, such a node cannot operate in a trustless manner. It is insecure.
|
|
|
Yes. Seriously. Did you buy this account?
Would you like me to create a signed message using the 1ciyam address to prove it?

No need. I'll interpret this merely as you replying 'No'. It is just that I remember interactions in years past whereby I thought you typically provided solid reasoning. Your posts of the last month or so don't seem so to me. More along the lines of axioms stated as proven fact, followed by conclusions with no intervening reasoning. Sorry - just calling it as I see it.
|
|
|
Well, no. Only for non-fully-validating nodes. All nodes that ignore the signature chain will need to trust other, fully validating nodes (which get no capacity boost) to do the validation for them.
It does for those who upgrade; those that don't upgrade never needed the increase in capacity.

I can't tell what is subject and what is object in your reply. But if I have your words parsed properly, then I believe you are making a false statement. Let me try again. For any given node:
- In order to operate in a trustless manner, a node must fully validate
- In order to fully validate, a node must have access to signature data
- SegWit partitions transaction data into two chains: the operational data and the signature data
- The 'capacity increase' that SegWit claims is entirely due to the fact that the signature data is not tabulated as part of the block size accounting

Ergo, any node wishing to operate in a trustless manner does not get any 'block size increase'. You may reply 'but fraud proofs'. But this is yet another mechanism entirely dependent upon outsourcing validation to other nodes. IOW, not trustless. Insecure.
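The chain of reasoning above reduces to a simple accounting identity, sketched below. The byte figures are illustrative assumptions (a 1MB counted block with a 1.75x overall factor), not measured values: a trustless node must fetch base plus witness data, while a node taking the 'capacity boost' at face value counts only the base bytes and trusts others for the rest.

```python
from dataclasses import dataclass

@dataclass
class SegwitBlock:
    base_bytes: int     # data counted against the 1 MB block size limit
    witness_bytes: int  # segregated signature data, not counted

def bytes_to_process(block: SegwitBlock, trustless: bool) -> int:
    """A fully validating (trustless) node must process the witness data
    too; skipping it means outsourcing signature validation to others."""
    if trustless:
        return block.base_bytes + block.witness_bytes
    # Insecure: signature validity is taken on trust from other nodes.
    return block.base_bytes

# Illustrative block: 1 MB counted, 0.75 MB of segregated signatures.
blk = SegwitBlock(base_bytes=1_000_000, witness_bytes=750_000)
print(bytes_to_process(blk, trustless=True))   # more than the 'counted' 1 MB
print(bytes_to_process(blk, trustless=False))  # 1 MB, but only by trusting others
```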
|
|
|
If you are an exchange and you decide to "remain open" then you are gambling (i.e. if you end up on the wrong fork and you have paid out fiat to people then you have just lost that fiat and gained nothing of any value).
I don't think that exchanges are going to be so brave as to want to gamble.
Nonsense. Any rational exchange, seeing the threshold nearing, will implement trading for both chains independently, with tools letting the nominal 'owner' of funds spent from the common trunk allocate them to whichever chain they desire. Any exchange that does not is incompetent, and deserves to be put out of business. I note that even SegWitters are starting to recognize the possibility of chain splitting due to the SegWit fork, anyway.
|
|
|
Bitcoin is not a political thing
Politics being nothing more nor less than the adjudication of who wields power in a group dynamic, this particular part of Bitcoin is indeed 100% political. One group (small blockers) wants one thing, and another group (big blockers) wants another. Which outcome wins? That will be decided by those who amass political power. Period. Sure, it all has to operate within the constraints of what technology allows, but it is still political. All your Pollyanna posturing can't change that fundamental truth.
|
|
|