21  Bitcoin / Development & Technical Discussion / Re: The situation wrt 0.16.3 version highlights urgency of streamlining updates on: September 26, 2018, 03:19:18 AM
Non updated full nodes could conceivably put the entire network at risk in the future.

I don't agree.  If a user isn't accepting transactions on a node or mining with it, there is no particularly urgent reason to upgrade.  There are plenty of upgraded nodes on the network now.

The reason the notices encourage people to upgrade urgently is not because everyone actually needs to, but because figuring out whether you really need to is hard, so the best advice is for everyone to do it.

Quote
Updating to 0.16.3 from 0.16.0-2 for a non sophisticated Linux user may be prohibitively complex. Add to this the fact that a neophyte might not even become aware of the need to update at all in situations like the one encountered wrt the 0.16.3 update, and the urgency of streamlining the update process for end users seems to me to be pretty high.
That sounds like a reasonable concern but it has to be counterbalanced against the risk of a malicious update being deployed.  The fact that upgrades take a while makes me feel confident rather than frightened.

It's like the advice goes, "if you never miss a flight, you're probably wasting too much time in airports".  If we never had reason to wish updates went faster, we'd probably be excessively exposing users to the risk of a bad update.

If there ever were a really serious issue where updates had to happen or else, we'd probably be advising people to stop accepting confirmations and potentially to turn their nodes off.
22  Bitcoin / Development & Technical Discussion / Re: The duplicate input vulnerability shouldn't be forgotten on: September 24, 2018, 04:19:15 AM
run multiple implementations of bitcoin (e.g. btcd and bitcoin core) and only transact while they are in agreeance.
Monitoring that way can be interesting (use old versions too)... but running them in close proximity to production machines may increase the risk of RCEs and resource exhaustion attacks. Though since you only need a yes/no from the monitoring, it could be isolated without too much trouble.  If this were considered a best practice, though, it would further increase the barrier to entry for participation.
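For concreteness, here is a minimal sketch of that yes/no monitoring idea, assuming two locally running implementations with JSON-RPC enabled; the endpoints, ports, and credentials are placeholders, and a real monitor would also tolerate the brief moments where one node simply lags the other by a block.

Code:
import time
import requests

# Placeholder endpoints/credentials for two independent implementations.
# Both Bitcoin Core and btcd expose a JSON-RPC "getbestblockhash" call;
# transport details (TLS for btcd, cookie auth, etc.) are omitted here.
NODES = {
    "bitcoind": ("http://127.0.0.1:8332", ("rpcuser", "rpcpass")),
    "btcd":     ("http://127.0.0.1:8334", ("rpcuser", "rpcpass")),
}

def best_block_hash(url, auth):
    payload = {"jsonrpc": "1.0", "id": "monitor",
               "method": "getbestblockhash", "params": []}
    r = requests.post(url, json=payload, auth=auth, timeout=10)
    r.raise_for_status()
    return r.json()["result"]

def in_agreement():
    tips = {name: best_block_hash(url, auth)
            for name, (url, auth) in NODES.items()}
    return len(set(tips.values())) == 1, tips

if __name__ == "__main__":
    while True:
        agreed, tips = in_agreement()
        if not agreed:
            # Halt acceptance of transactions and alert a human.
            print("TIP DISAGREEMENT -- stop transacting:", tips)
        time.sleep(60)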

You are a lot more advanced than many Bitcoin using businesses: you actually report bugs and help test fixes. For many others, it's remarkable if they do anything more than call out to a bc.i api. Someone on IRC was pointing out the rather disappointing number of bitcoin sites that were currently managing to expose the bitcoind rpc to the public internet.  Sad
23  Bitcoin / Development & Technical Discussion / Re: The duplicate input vulnerability shouldn't be forgotten on: September 24, 2018, 12:57:31 AM
You have attacked me with character attacks by viciously claiming on Reddit that I had sold my credentials,

This is the post that you're referring to:

Quote
I get the impression that cobra sold his credentials last year: He put up some sketchy warnings about the binaries on bitcoin.org then went quiet for a long time.

When he came back he started posting some really over the top rbtc conspiracy theory nonsense on reddit. When people moved to take action about it he suddenly said "oh my account was hacked" and dropped it. But the account wasn't used for the kind of petty vandalism that you normally see when a hacked account can't otherwise be used... Since then he's been slowly cranking up the psycho behavior, and right now he seems not far from the sudden behavior of the 'hacked' account.

Given that, I'm not surprised to see the BCH pumping, and of course that's ignoring that whatever "better for payments" argument you can currently make for BCH could better be applied to Litecoin (which also has a lower interblock interval, AND segwit), and yet litecoin has mostly gone nowhere.

FWIW, no one wants a POW change more than Bitmain. They crank out chips privately for even obscure POWs then dump them on the public once they've reached diminishing returns on their own production. With sha256d they're competing against a huge installed base. Moreover, Bitmain has gone around unethically and unlawfully claiming patents on basic mining techniques, like series wiring the chips to reduce converter costs, which were in use prior to Bitmain and which any competitive mining device for any POW would adopt.

I apologize for insulting you, it was really not my goal.  I'm not sure what else I was supposed to think when one day you're asking blockstream for money (and suggesting bitcoin.org/maybe you were broke) and then later [edit: I thought it was a few weeks, but it may just be that I only noticed the message then or there might have been more than one. I no longer have my blockstream account so I can't tell] started posting things like Merchant adoption will come naturally once people realize that the other coin is crippled by Blockstream and /u/nullc and that they can't transact without paying outrageous transaction fees. [...] Of course bitcoin.org should be changed to embrace Bitcoin Cash. Blockstream coin is not Bitcoin. [...] It's a form of censorship by Blockstream Core.  [...] This is what AXA invested in them for, to cripple the network. . I'd never seen you say anything like that before. And even more recently you continue to say things that look a lot like it to me, also, also, also (especially weird since you yourself told us downloads on bitcoin.org were unsafe, you seemed to think alternative downloads were a good idea, and then a year later are angry about it), also, also.   It's okay for you to go around suggesting "compromised by the NSA" without a single shred of evidence, but you think it's toxic for me to say that I "get [an] impression" and point out an apparent radical change in your behaviour? Sad

Theymos says he thinks you've been consistent all along; I trust him to know, but it's not like my comments were coming out of nowhere.  I had no reason to dislike you previously; in fact, almost the sum total of my other interaction with you was stepping up to defend you when I thought people were unfairly attacking you after you said something easily misunderstood (like the 'revise the whitepaper' thing).

Why do you find it so insulting that I wondered if you sold your credentials -- with an explanation of my concerns -- after you started attacking someone who's done, AFAIK, nothing but support you previously, while you seem to think it's okay to spread worse claims about other people?

I'd say "what would you do in my shoes"-- but it seems like the answer is that you'd make accusations and not even provide evidence.  Is that really your intent?

Quote
but I remember you
Memory is a tricky thing. In fact, when writing the above I thought several times that you had also posted additional things, which turned out to actually be posts from people who responded to the things above agreeing with your attacks, not things said by you -- sorry about that, but at least I haven't accused you of those things, because I actually checked.

I am one of the only project contributors who actually takes the time to even try to communicate with people who seem to be significantly confused. 99.9% of the time other people will just ignore them completely. I don't think it helps improve the level of discourse if everyone puts themselves in a high castle and doesn't even hear out their opposition or respect them enough to even argue the case.  But at least when I'm critical of your actions I'm willing to be precise enough that you can defend or contextualize them... or even admit a mistake.  I've certainly made mistakes, but at least I've tried to do something good. I feel like your comments here-- with name-calling like "incompetent"-- are saying that you'd prefer a world where no one does anything (except maybe insult and conspiracy-theorize about others), because if they do enough good things you'll ignore all that and attack them for the few things that could be improved.  If that isn't what you're going for, I'd really like you to help me understand where you're coming from.

Quote
But in Bitcoin Core, developers can work in their own branches with total freedom, and no red tape,
And no review, which was my point.

Quote
IMO there's no harm in making this step slightly longer
Perhaps, but I don't see how "slightly longer" connects with your post. The major release cycle is already six months long.  This issue took two years to discover; making the cycle seven months long would not have caused it to be detected, but it might plausibly make people take review less seriously.  I guess my point is that we've already made the cycle a lot more than slightly longer and tapped out the benefit from doing so; further increases might tip us into the realm of costs exceeding benefits, and there are other things we can do that don't add delays but would do more to prevent serious problems.
24  Bitcoin / Development & Technical Discussion / Re: The duplicate input vulnerability shouldn't be forgotten on: September 23, 2018, 08:04:04 PM
but you were recently someone who would attack other development teams trying to "improve the world" in their own way with even more harsh terms and toxic insults
It would be more productive if you were specific instead of vague. The vague allegations, devoid of context, just come across as toxic themselves-- a character attack, rather than a complaint about something specific that could be handled better.

I believe my suggested improvements to you above-- subscribe to the list, look for things you can do to help-- were specific, actionable, well justified, and hopefully not at all personally insulting.

Quote
encourage users to mess with it, possibly even with some sort of bounty scheme, and once sufficient time has passed and lots of different people and companies in the industry have tried to break it, you merge it in
In general people ignore "test" things; they just don't interact with them at all unless the thing in question is on rails that will put it somewhere critical reasonably soon.

We see this pattern regularly: when people put up complex PRs and explicitly mark them as not-merge-ready they get fairly small amounts of substantive review, and then after marking them merge-ready suddenly people show up and point out critical design mistakes. Resources are finite, so generally people prefer to allocate them to things that will actually matter, rather than things that might turn out to be flights of fancy that never get finished or included. They aren't wrong to do it either; there are a lot more proposals and ideas than things that actually work and matter.  You can also see it in the fact that far more interesting issues are reported by users using mainnet rather than testnet.  This isn't an argument to not bother with pre-release testing and test environments: both do help. But I think that we're likely already getting most of the potential benefit out of these approaches.

"testing" versions also suffer from a lack of interaction with reality. Many issues are most easily detected in live fire use, sadly.  And many kinds of problems or limitations are not the sort of thing that we'd rather never have an improvement at all than suffer a bit of a problem with it.

Additionally, how much delay do you think is required? This issue went undetected for two years. Certainly with lower stakes we could not expect a "testing" version to find it _faster_.  Sticking every change behind a greater-than-two-year delay would mean a massive increase in exposure from the lack of beneficial changes, not to mention the knock-on cultural effects: I should have cited it specifically, but consider the study results suggesting that safety equipment like bike helmets or seatbelts sometimes results in less safe behavior. Checklists can be valuable, for example, but have also been shown to cause a reduction in personal responsibility and careful consideration. "I don't have to worry about reviewing this thing, because any issues will be found by the process and/or will be someone else's problem if they get through".

As I said above, if we had a series of concerning issues being found shortly after releases, that might be a good case for going more slowly generally. But that isn't the case. With only a few exceptions, when we find issues that are concerning enough to justify a fast fix, they're old issues.  We do find many issues quickly, but before they're merged or released.

I've generally found that for software quality each new technique has its own burst of benefit, but continued deeper application of the technique has diminishing marginal returns. Essentially, any given approach to quality has a class of problems it stops substantially while being nearly blind to other kinds of issues. Even limited application tends to pick the low-hanging fruit.  As a result it's usually better to assure quality in many different ways by many different people, rather than dump inordinate effort into one or a few techniques.

Quote
You claim that moving slower would potentially result in less testing, which makes no sense.[...] I can't fathom how moving slower wouldn't help here.
I'm disappointed; I think I explained directly and via analogy why this is the case, but it doesn't seem to have been communicated to you. Perhaps someone else will have a go at translating the point, if it's still unclear. Sad
25  Bitcoin / Development & Technical Discussion / Re: The duplicate input vulnerability shouldn't be forgotten on: September 23, 2018, 07:50:05 PM
but I think it is more about what is right and what to do rather than rejecting false ideas and pointing out what should not be done.
I think I did speak to it some.

More directly: Rather than shy from danger we must continue to embrace it and manage it.

The safest change is a change that you think is dangerous but is actually safe, the next most safe is a change that is dangerous and you know is dangerous, and least safe is one you think is safe but is actually dangerous. There is pretty much no such thing as a change you think is safe and actually is safe. There is also no such thing as a failure to change something that should be changed that is safe, because the world doesn't stay constant and past reliability doesn't actually mean something was free from errors.

Obviously danger for the sake of danger would be dumb, so we should want to make a reasonable number of known dangerous changes which are clearly worth it, and which justify the review and testing needed to make them safe enough -- review and testing which will, along the way, also find issues we didn't know we had.

If instead people feel burned by this issue and shy even further away from dangerous changes, the activity won't justify due care, and eventually we'll get burned by a "safe" change that isn't actually safe.

As mentioned, even the complete absence of changes isn't safe, since without changes old vulnerabilities and security-relevant shortcomings don't get addressed (e.g. I mentioned multisig and hardware wallets, but there are many others).

Not to point fingers-- especially as there was no causal relationship in this instance-- but it occurs to me that in many recent posts you've advocated for radically reducing the interblock interval-- an argument that is anything but laughable only because of a multitude of changes like 9049, the change that introduced this bug, squeezing out every last ounce of performance (and knocking the orphan rate at 10 minutes from several percent down to something much smaller).  If our community culture keeps assuming that performance improvements come for free (or that the tradeoffs from load can be ignored and dealt with later) then they're going to keep coming at the expense of other considerations.

As far as concrete efforts go, it's clear that there needs to be a greater effort to go systematically through the consensus rules and make sure every part of the system has high-quality testing-- even old parts (which tend to be undertested)-- and that, as needed, the system is adjusted to make higher-quality testing possible. To achieve that it will probably also be necessary to have more interaction about what constitutes good testing; I'll try to talk to people more about what we did in libsecp256k1, since I think it's much closer to a gold standard. I am particularly fond of 'mutation' style testing, where you test the tests by introducing bugs and making sure the tests fail, but it's tricky to apply to big codebases, and is basically only useful after achieving 100% condition-decision test coverage. Here is an example of me applying that technique in the past with good results (it found several serious shortcomings in the tests and found an existing long-standing bug in another subsystem as a side-effect).
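To make the mutation idea concrete, here is a toy Python sketch; the function and test cases are invented for the example, and real mutation tooling automates the bug-introduction step rather than doing it by hand.

Code:
# Mutation-style testing in miniature: deliberately break the code and
# confirm the test suite notices. If the tests still pass against the
# mutant, the tests are too weak.
def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def median_mutant(xs):                 # hand-introduced bug: off-by-one pivot
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2] + s[n // 2]) / 2

def run_tests(fn):
    cases = [([1, 3, 2], 2), ([1, 2, 3, 4], 2.5), ([5], 5)]
    return all(fn(inp) == expected for inp, expected in cases)

assert run_tests(median)               # tests pass on the real code...
assert not run_tests(median_mutant)    # ...and fail on the mutant, as they should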

I think there is also an opportunity to improve the uniformity of review but I'm not quite sure how to go about doing that: E.g. checklists have a bad habit of turning people into zombies, but there are plenty of cases where well constructed ones have had profoundly positive effects. Unfortunately we still just don't have enough really strong reviewers.  That cannot be improved too quickly simply because it takes a lot of experience to become really effective. We have more contributors now than in the past though, so there is at least the potential that many of the new ones will stick around and mature into really good reviewers.

We also shouldn't lose perspective on the big things that we're getting right in other ways, and the limitations of those approaches.  So for example, 9049 introduced a fault in the course of speeding up block propagation.  One alternative we've used since 2013 is implementing fast propagation externally: doing so reduces the risk that new propagation stuff introduces bugs and also lets us try new techniques faster than would be possible if we had to upgrade the whole network.  Bitcoin Fibre has a new, radically different, ultra-super-mega-rocket-science approach to propagating blocks. The Fibre protocol's implementation has also been relatively chock-full of pretty serious bugs, but they haven't hurt Bitcoin in general because they're separate.  Unfortunately, we've found that with the separation few parties other than Matt's public fibre network run it, creating a massive single point of failure where, if his systems are turned off, block propagation speed will drop massively.  To address that, we've been creating simplified subsets of that work, hardening them up, making them safe, and porting them to the network proper-- BIP152 (compact blocks) is an example of that.  A gap still exists (fibre has much lower latency than BIP152) and the gap is still a problem, since propagation advantages amplify mining centralization (as well as make selfish mining much more profitable)... but it's less of one.

This is just one of the ways we've all managed the tension between a security-driven demand to improve performance and a security-driven demand to simplify and minimize risky changes. There just isn't a silver bullet.  There are lots of things we can do, and do do, but none magically solve all the problems. The most important thing is probably that we accept the scope of the challenge and face it with a serious and professional attitude, rather than pretending that something less than a super-human miracle is good enough. Smiley
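As an aside, a highly simplified sketch of the idea behind BIP152 compact blocks: announce a block as its header plus short transaction IDs, let the peer reconstruct it from its own mempool, and fetch only what's missing. This is only a conceptual illustration with made-up transaction IDs; the real protocol uses salted SipHash-based 6-byte short IDs, prefilled transactions, and high/low-bandwidth modes, none of which are modeled here.

Code:
import hashlib

def short_id(txid):
    # Placeholder for BIP152's salted SipHash short-ID computation.
    return hashlib.sha256(txid.encode()).hexdigest()[:12]

def make_compact_block(header, txids):
    """Announce a block as its header plus short IDs of its transactions."""
    return {"header": header, "short_ids": [short_id(t) for t in txids]}

def reconstruct(compact, mempool_txids):
    """Rebuild from our own mempool; return (known txids, short IDs still needed)."""
    by_short = {short_id(t): t for t in mempool_txids}
    known, missing = [], []
    for sid in compact["short_ids"]:
        if sid in by_short:
            known.append(by_short[sid])
        else:
            missing.append(sid)
    return known, missing

# A peer that already has two of the three transactions only requests the
# third, instead of re-downloading the full block.
compact = make_compact_block({"height": 543210}, ["aaa111", "bbb222", "ccc333"])
known, missing = reconstruct(compact, mempool_txids=["bbb222", "aaa111", "ddd444"])
assert known == ["aaa111", "bbb222"] and len(missing) == 1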

26  Bitcoin / Development & Technical Discussion / Re: The duplicate input vulnerability shouldn't be forgotten on: September 23, 2018, 06:13:16 PM
Apparently, according to Luke-jr who's subscribed, Bitcoin Core still hasn't even bothered to use their announcement mailing list (specifically set up for security issues and new releases) to warn about CVE-2018-17144 yet (a security issue fixed in a new release no less!). That's pretty incompetent if you ask me.
The announcement of the new release and fix was sent to it.  You mean the guy who controls bitcoin.org isn't subscribed to it? That's pretty ... suboptimal, if you ask me. Smiley

It's easy to point fingers and assert that the world would be better if other people acted according to your will instead of their own free will; it's harder to look at your own actions and contemplate what you could do to improve things.  To risk making the same mistake myself: instead of calling people who owe you nothing incompetent, you could offer to help...  Just a thought.  It's the advice that I've tried to follow myself, and I think it's helped improve the world a lot more than insults ever would.
27  Bitcoin / Development & Technical Discussion / Re: The duplicate input vulnerability shouldn't be forgotten on: September 23, 2018, 06:06:18 PM
and therefore doesn't have anything to do with Bitcoin,
The fact that it created real vulnerabilities that could have been used to cause massive funds loss, even for people who removed the code, suggests otherwise!  Something doesn't have to be part of consensus rules directly to create a risk for the network.

The general coding style in Bitcoin makes RCEs unlikely, or otherwise I would say that I would be unsurprised if the ALERT system had one at some point in its history... It was fairly sloppy and received inadequate attention and testing because of its infrequent use and because only a few parties (like the Japanese government ... Sad ) could make use of it.

and I say this as one of the uh, I think, three people to have ever sent an alert.
28  Bitcoin / Development & Technical Discussion / Re: The duplicate input vulnerability shouldn't be forgotten on: September 23, 2018, 05:44:52 PM
Perhaps the Core release schedule is too fast. Even though it sometimes already feels painfully slow, there's no particular "need to ship", so it could be slowed down arbitrarily in order to quadruple the amount of testing code or whatever.

I believe slower would potentially result in less testing and not likely result in more at this point.

If we had an issue where newly introduced features were turning out to frequently have serious bugs discovered shortly after shipping, there might be a case that delaying improvements more before putting them into critical operation would improve the situation... but I think we've been relatively free of such issues.  The kinds of issues that will be found with just a bit more time are almost all already found prior to release.

Quadrupling the amount of (useful) testing would be great.  But the way to get there may be via more speed, not less. You might drive safer if you drove a bit slower, but below some speed, you become more likely to just fall asleep while driving.

Imagine a bridge construction crew with generally good safety practices that has a rare fatal accident. Some government bureaucrat swings in and says "you're constructing too fast: it would be okay to construct slower, fill out these 1001 forms in triplicate for each action you take to prevent more problems".  In some kind of theoretical world the extra checks would help, or at least not hurt.  But for most work there is a set of optimal paces where the best work is done: fast enough to keep people's minds maximally engaged, slow enough that everything runs smoothly and all necessary precautions can be taken.  We wouldn't be too surprised to see that hypothetical crew's accident rate go up after a change to increase overhead in the name of making things more safe: either efforts that actually improve safety get diverted to safety theatre, or otherwise some people just tune out, assume the procedure is responsible for safety instead of themselves, and ultimately make more errors.

So I think, rather, it would be good to say that things should be safer _even if_ it would be slower. This is probably what you were thinking, but it's important to recognize that "just do things slower" by itself will not make things safer when the effort isn't already suffering from momentary oopses.

Directing more effort into testing has been a long term challenge for us,  in part because the art and science of testing is no less difficult than any other aspect of the system's engineering. Testing involves particular skills and aptitudes that not everyone has.

What I've found is that when you ask people who aren't skilled at testing to write more tests, what they generally write are rigid, narrow-scope, known-response tests.  So what they do is imagine the typical input to a function (or subsystem), feed that into it, and then make the test check for exactly the result the existing function produces.  This sort of test is of very low value:  It doesn't test extremal conditions, especially not the ones the developer(s) hadn't thought of (which are where the bugs are most likely to be), it doesn't check logical properties so it doesn't give us any reason to think that the answer the function is currently getting is _right_, and it will trigger a failure if the behaviour changes even if the change is benign, but only for the tested input(s). They also usually only test the smallest component (e.g. a unit test instead of a system test), and so they can't catch issues arising from interactions.  These sorts of tests can be worse than no test in several ways: they falsely make the component look tested when it isn't, and they create tests that spew false positives as soon as you change something, which both discourages improvements and encourages people to blindly update tests and miss true issues. A test that alerts on any change at all can be appropriate for normative consensus code which must not change behaviour at all, but good tests for that need to test boundary conditions and random inputs too. That kind of test isn't very appropriate for things that are okay to change.
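To illustrate the contrast in miniature, here is a hedged Python sketch; the clamp function and both tests are invented for the example, not taken from any real codebase.

Code:
import random

def clamp(x, lo, hi):
    """Constrain x to the closed interval [lo, hi]."""
    return max(lo, min(hi, x))

# The "known response" style criticized above: one typical input, checked
# against whatever the code currently returns. It never probes edge cases.
def test_clamp_weak():
    assert clamp(5, 0, 10) == 5

# A property-and-boundary style test: logical properties over random inputs,
# plus explicit boundary cases. It gives a reason to believe the answer is
# *right*, not merely unchanged.
def test_clamp_properties():
    for _ in range(1000):
        lo = random.randint(-100, 100)
        hi = random.randint(lo, lo + 200)
        x = random.randint(-1000, 1000)
        y = clamp(x, lo, hi)
        assert lo <= y <= hi                    # result is always in range
        if lo <= x <= hi:
            assert y == x                       # in-range inputs pass through
    assert clamp(-7, -7, 3) == -7 and clamp(3, -7, 3) == 3   # exact boundaries

test_clamp_weak()
test_clamp_properties()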

In this case, our existing practices (even those of two years ago) would have been at least minimally adequate to have prevented the bug. But they weren't applied evenly enough.  My cursory analysis of the issue suggests that there was a three-component failure: The people who reviewed the change had been pre-primed by looking at the original introduction of the duplicate tests, which came with a strong proof that the test was redundant. Unfortunately, later changes had made it non-redundant, apparently without anyone realizing it. People who wouldn't have been snowed by that (e.g. Suhas never saw the change at all, and since he wasn't around for PR443 he probably wouldn't have easily believed the test was redundant) just happened to miss that the change happened, and review of the change got distracted by minutiae, which might have diminished its effectiveness. GitHub, unfortunately, doesn't provide good tools to help track review coverage. So this is an area where we could implement some improved process that made sure that the good things we do are done more uniformly. Doing so probably won't make anything slower.  Similarly, a more systematic effort to assure that all functionality has good tests would go a long way: new things in Bitcoin tend to be tested pretty well, but day-zero functionality that never had tests to begin with isn't always.
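For readers unfamiliar with the bug, here is a minimal Python sketch (not Bitcoin Core's actual code, which is C++) of the class of check at issue in CVE-2018-17144: no transaction may spend the same outpoint (txid, output index) twice. 9049 skipped the per-transaction form of this check during block validation on the assumption that it was redundant.

Code:
def has_duplicate_inputs(tx_inputs):
    """tx_inputs: iterable of (txid, vout) outpoints spent by one transaction."""
    seen = set()
    for outpoint in tx_inputs:
        if outpoint in seen:
            return True          # same outpoint spent twice within one tx
        seen.add(outpoint)
    return False

# Toy example with made-up txids: a transaction naming the same outpoint twice
# must be rejected; if block validation skips this check and nothing downstream
# catches it, the duplicated value can effectively be created out of thin air.
tx = [("d5d27987d2a3dfc7", 0), ("a2f1c39e55b10b4d", 1), ("d5d27987d2a3dfc7", 0)]
assert has_duplicate_inputs(tx)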

It takes time to foster a culture where really good testing happens, especially because really good testing is not the norm in the wider world.  Many OSS and commercial projects hardly have any tests at all, and many that do hardly have good ones. (Of course, many also have good tests too... it's just far from universal.)  We've come a long way in Bitcoin-- which originally had no tests at all, and for a long time only had 'unit tests' that were nearly useless (almost entirely examples of that kind of bad testing). Realistically it'll continue to be slow going especially since "redesign everything to make it easier to test well" isn't a reasonable option, but it will continue to improve. This issue will provide a nice opportunity to nudge people's focus a bit more in that direction.

I think we can see the negative effect of "go slower" in libsecp256k1.   We've done a lot of very innovative things with regard to testing in that sub-project, including designing the software from day one to be amenable to much stronger testing, and we've had some very good results from doing so.  But the testing doesn't replace review, and as a result the pace in the project has become very slow-- with nice improvements sitting in PRs for years. Slow pace results in slow pace, and so less new review and testing happens too... and also fewer new developments that would also improve security get completed.

The relative inaccessibility of multisig and hardware wallets are probably examples of where conservative development in the Bitcoin project has meaningfully reduced user security.

It's also important to keep in mind that in Bitcoin performance is a security consideration too.  If nodes don't keep well ahead of the presented load, the result is that they get shut off or never started up by users, larger miners get outsized returns, miners centralize, and attackers can partition the network by resource-exhausting nodes.  Perhaps in an alternative universe where, back in 2013 when it was discovered that pre-0.8 nodes would randomly reject blocks over ~500k, the block size had been permanently limited to 350k, we could have operated as if performance weren't critical to the survival of the system. But that isn't what happened. So "make the software triple-check everything and run really slowly" isn't much of an option either, especially when you consider that the checks themselves can have bugs.
29  Bitcoin / Development & Technical Discussion / Re: The duplicate input vulnerability shouldn't be forgotten on: September 23, 2018, 05:00:55 PM
For example, right now, a huge majority of the network is running vulnerable software, but because the alert system was removed, there's no way to reach them and tell them to upgrade. We just have to pray they check certain sites. Makes no sense. Some sort of alert feature in software that handles money is a necessity, it could have been enabled by default with an option to disable it. Satoshi was so smart and practical to add that.
You do realize that you're lamenting the removal of a back door key that could be used to remotely crash nodes?   Even without the crash, it inherently meant that a single, potentially compromised party had the power to pop up potentially misleading messages to all users of the software-- like falsely claiming there were issues that required replacing the software with insecure alternatives. At least when people get news "from the internet" there are many competing information sources that could warn people to hold off.

No thanks.

No one who wants that power should by any account be allowed to have it.  Not if Bitcoin is to really show its value as a decentralized system.

I don't doubt that better notification systems could be built... things designed to prevent single bad actors from popping up malicious messages, but even then there is no reason to have such tools directly integrated into Bitcoin itself.  I'd love to see someone create a messaging system that could send messages in a way that no single compromised party could send a message or block publication, where messages couldn't be targeted at small groups but had to be broadcast even in the face of network attacks, where people could veto messages before they're displayed, where message display would be spread out over time so users exposed earlier could sound the alarm about a bad message... etc.  But there is no reason for such a system to be integrated into Bitcoin; it would be useful far beyond it.

But it just goes to show that for every complex problem there is a simple centralized solution and inevitably someone will argue for Bitcoin to adopt it.
30  Bitcoin / Development & Technical Discussion / Re: The duplicate input vulnerability shouldn't be forgotten on: September 23, 2018, 04:53:22 PM
Just responding to this particular fork of discussion now,

I know this is probably the last argument most people want to hear, but is this not a case where more independent implementations would result in less risk?

They would create more risk. I don't think there is any reason to doubt that this is an objective fact which has been borne out by the history.

First, failures in software are not independent. For example, when BU nodes were crashing due to xthin bugs, Classic was also vulnerable to effectively the same bug, even though its code was different and some triggers that would crash one wouldn't crash the other. There have been many other bugs in Bitcoin that have been straight-up replicated many times, for example the vulnerability to crashes through memory exhaustion caused by loading a lot of inputs from the database concurrently: every other implementation had it too, and in some it caused a lot more damage.

Even entirely independently written software tends to have  similar faults:

Quote
By assuming independence one can obtain ultra-reliable-level estimates of reliability even though the individual versions have failure rates on the order of 10^-4. Unfortunately, the independence assumption has been rejected at the 99% confidence level in several experiments for low reliability software.

Furthermore, the independence assumption cannot ever be validated for high reliability software because of the exorbitant test times required. If one cannot assume independence then one must measure correlations. This is infeasible as well—it requires as much testing time as life-testing the system because the correlations must be in the ultra-reliable region in order for the system to be ultra-reliable. Therefore, it is not possible, within feasible amounts of testing time, to establish that design diversity achieves ultra-reliability. Consequently, design diversity can create an “illusion” of ultra-reliability without actually providing it.
(source)

More critically, the primary thing the Bitcoin system does is come to consensus:  Most conceivable bugs are more or less irrelevant so long as the network behaves consistently.  Is an nlocktime check a < or a <= check? Who cares... except if nodes differ the network can be split and funds can be taken through doublespending the reorg.  Are weird "hybrid pubkeys" allowed? Who cares... except if nodes differ the network can be split and funds can be taken through doublespending the reorg.  What happens when a transaction has a nonsensical signature that indicates sighash single without a matching output? Who cares... except if nodes differ the network can be split and funds can be taken through doublespending the reorg. And so on.  For the vast majority of potential bugs, having multiple incompatible implementations turns a non-event into an increasingly serious vulnerability.
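As a toy illustration of that "who cares... unless nodes differ" point (with invented function names and a deliberately simplified locktime rule -- real nLockTime semantics also involve time-based locks and sequence finality), consider two implementations that differ only in the strictness of one comparison:

Code:
def locktime_ok_strict(tx_locktime, height):
    return tx_locktime < height       # implementation A uses "<"

def locktime_ok_lenient(tx_locktime, height):
    return tx_locktime <= height      # implementation B uses "<="

height = 500_000
tx_locktime = 500_000                 # the single boundary value where they differ

# In isolation nobody would care which rule is "right", but a block containing
# this transaction is valid to B and invalid to A: the network splits into two
# chains, and funds can be double-spent across the eventual reorg.
assert locktime_ok_strict(tx_locktime, height) != locktime_ok_lenient(tx_locktime, height)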

Even for an issue which isn't a "who cares", like allowing inflation of the supply, adding a surprise disagreement between nodes does not help matters!  The positive effect it creates is that you might notice the problem faster after the fact, but for that benefit you don't actually need additional full implementations, just monitoring code.  In the case of CVE-2018-17144 the sanity checks on node startup would also detect the bad block. On the negative side, the disagreements just cause a consensus split and result in people being exposed to funds loss in a reorg-- which is the same kind of exposure that they'd have with a single implementation plus monitoring and then fixing the issue when detected, other than perhaps the reorg(s) might be longer or shorter in one case or another.

In this case ABC also recently totally reworked the detection of double spends as a part of their "canonical transaction order" changes, which require that double-spending checks be deferred and run as a separate pass after processing because transactions are no longer required to be in their logical causal order.  Yet they both failed to find this bug as part of testing that change and also failed to accidentally fix it.

Through all the history of Bitcoin-- altcoin forks copying the code, other reimplementations, and many other consensus bugs, some serious and some largely benign-- this is the first time someone working on another serious implementation that someone actually uses has actually found one.  I might be forgetting something, but I'm certainly not forgetting many. Kudos to awemany. In all other cases they either had the same bugs without knowing it, or accidentally fixed them, creating fork risk (and thus vulnerability) without knowing it. It also isn't like having multiple implementations is new-- for the purpose of this point every one of the 1001 altcoins created by copying the Bitcoin code counts as a separate implementation too, or at least the ones which are actively maintained do.

Having more versions also dilutes resources, spreading review and testing out across other implementations. For the most part historically the Bitcoin project itself hasn't suffered much from this: the alternatives have tended to end up as barely-maintained one- or two-developer projects (effectively orphaning users that depended on them-- another cost of that diversity), so it doesn't look like they caused much meaningful dilution to the main project.  But it clearly harms the alternatives, since they usually end up with just one or two active developers and seldom have extensive testing.  Node diversity also hurts security by making it much more complicated to keep confidential fixes private. When an issue hits multiple implementations its existence has to be shared with more people earlier, increasing the risk of leaks, and coincidentally timed cover changes can give away a fix which would otherwise go unnoticed. In a model with competing implementations some implementers might also hope to gain financially by exploiting their competitors' bugs, further increasing the risk of leaks.

Network effects mean that in the long run only one or maybe a few versions will ultimately have enough adoption to actually matter. People tend to flock to the same implementations because they support the features they need and because they can get the best help from others using the same stuff. So even if the benefits didn't usually fail to materialize, they'd fail to matter through disuse.

Finally, diversity can exist in forms other than creating more probably-disused, probably incompatible implementations.  The Bitcoin software project is a decentralized collaboration of dozens of regular developers.  Any one of the participants there could go and make their own implementation instead (a couple have, in addition to contributing).  The diverse contributors there do find and fix and prevent LOTS of bugs, but do so without introducing more incompatible implementations to the network.   That is, in fact, the reason that many people contribute there at all: to get the benefit of other people hardening the work and to avoid damaging the network with potential incompatibilities.

All this also ignores the costs of implementation diversity unrelated to reliability and security, such as the redundancy often making improvements more expensive and slow to implement.

TLDR: In Bitcoin the system as a whole is vulnerable to disruption if any popular implementation has a consensus bug: the disruption can cause financial losses even for people not running the vulnerable software, by splitting consensus and causing reorgs. Implementations also tend to frequently reimplement the same bugs even in the rare cases where they aren't mostly just copying (or transliterating) code, which is good news for staying consistent but means that even when consistency isn't the concern the hoped-for benefit usually will not materialize. Also, network effects tend to keep diversity low regardless, which again is good for consistency, but bad for diversity actually doing good. To the extent that diversity can help, it does so primarily through review and monitoring, which are better achieved through collaboration on a smaller number of implementations.

Cheers,
31  Bitcoin / Bitcoin Discussion / Re: Bitcoin Core 0.16.3 Released on: September 20, 2018, 06:33:00 PM
Please report the malware false positive.  False positives have also happened because there have been several public campaigns in an altcoin forum to report the bitcoin software as malware. Sad
32  Bitcoin / Development & Technical Discussion / Re: Total Number of full nodes operating. Less than 10k. on: September 10, 2018, 10:41:22 PM
Well, I should say all additional full nodes being run, would add some more redundancy. Huh When other nodes goes down and there are backup nodes, then that adds some value, right?
There are on the order of 10,000 reachable nodes. So we already have ~10,000x redundancy; adding more doesn't seem like much of a value. Also, I was mostly trying to ask about someone already running one or more nodes adding an additional one-- the additional one probably goes down if the rest do, so the addition is not of much value.

There are plenty of reasons for people to run bitcoin nodes-- better security, better privacy...  but not really any gain in having a hosting party run multiple.
33  Bitcoin / Development & Technical Discussion / Re: What are the steps to take to publish the solution for blockchain scaling? on: September 09, 2018, 02:33:49 PM
The block limit was purposely implemented for spam protection
This claim is just one of the pieces of unsupported nonsense pumped out by paid shills; it's so sad that so many people have accepted it just because of the number of sock accounts that have constantly repeated it. Sad

As far as the thread goes, many people have made similar claims before. Their proposals have inevitably turned out to be banal rehashings of more or less obvious ideas that the Bitcoin community considered and rejected as far back as 2010, because they don't respect some critical element of what makes Bitcoin Bitcoin-- an element the proposers themselves weren't aware of, as a result of so much Bitcoin promotional material giving an incomplete view of the system. The most common thing I've seen such proposals do is introduce unfounded third-party trust (usually in miners, sometimes in other random node operators) and in doing so eliminate the system's trustlessness.

Regardless, instead of saying that you have an idea that achieves some grand effect, you should simply publish the idea.  Crowing about it just burns your and everyone else's time without making the world a better place or giving you an opportunity to improve your understanding.

34  Bitcoin / Development & Technical Discussion / Re: Total Number of full nodes operating. Less than 10k. on: September 08, 2018, 11:08:41 PM
If I help to fund someone to run a full node on my behalf, it still adds value to the community.
What value are you referring to specifically? I'm especially interested in what value you see being provided beyond the one or more that they're already running (perhaps for someone else)?
35  Bitcoin / Development & Technical Discussion / Re: Total Number of full nodes operating. Less than 10k. on: September 08, 2018, 07:36:24 AM
We cannot always make citations on the white paper because Bitcoin has already developed further beyond "Satoshi's vision".
On this point you can: in particular, the white paper pointed out that nodes should be run for independent security (last sentence of section 8) and that mining would be done by specialized hardware...  Moreover, there have been a number of altcoins that made radical design changes along the lines of what aliashraf has been promoting, and yet they do not see an increase in nodes.

Mining centralization is certainly a problem, but that doesn't make it the origin of every problem nor does it mean that any particular proposed solution would improve it or anything else for that matter.

Quote
It will make miners run their own nodes,
No it won't. If it did, it wouldn't have a snowball's chance in hell of getting significant usage. It lets pooling for payments be operated separately from the consensus-- which does mean that you could still pool payments in a fairly traditional way while running off your own node, but it doesn't require that you run your own (nor could it really).  After talking to multiple large miners (tens of MW of mining) who have _never_ run a Bitcoin wallet (they just have their pools pay directly to an exchange account), I think the potential of this is easily overstated, but it's just the right (better even) thing to do.  Combined with other efforts to lower node operational costs we might see some more independence there, but I wouldn't hold my breath for a rapid change. Smiley
36  Bitcoin / Development & Technical Discussion / Re: Total Number of full nodes operating. Less than 10k. on: September 07, 2018, 04:01:25 PM
Aliashraf,

We all understand that you are angry that other people don't want to spend their time radically redesigning Bitcoin in accordance with the "fixes" you demand-- fixes which others have responded to and concluded rest on wrong and confused technical arguments. We all also understand that you disagree with these analyses.

In this thread, posts about these complaints are offtopic; they have little to do with the subject of the thread beyond your broad conjecture that if there were less cause for pooling there would be a lot more nodes-- a weak argument, but one that could be made without all the vitriol. You're turning the thread into a series of abuses that will discourage productive on-topic discussion. Cut it out.
37  Bitcoin / Development & Technical Discussion / Re: Total Number of full nodes operating. Less than 10k. on: September 06, 2018, 03:02:27 PM
Paying someone to run nodes (or running one yourself on a third party controlled VPS service like amazon or digital ocean, for that matter) wouldn't serve much of a purpose.

I beg to differ. You will still have those people with enough resources and bandwidth to keep it decentralized. I want to add to that group, by giving people with local bandwidth and resource issues a platform to contribute to this important service.

These people would not necessarily be running a full node, because of these problems, but they can now contribute financially to run a full node, by just funding the people who can do this.

I have some friends in some rural areas with very bad internet and they desperately want to contribute, but the local infrastructure issues, stop them from doing that.

This does not mean that we would have only 1 organization doing this in 1 location. <It might be Bitcoin merchants in several different locations, with better bandwidth> 

Your position assumes there is a contribution created by running more nodes on Amazon. There isn't, for the most part.  Effectively, paying amazon to run more nodes is just funding a benign sybil attack by Amazon on the network.

I'm well aware that there are parts of the world where it is prohibitively expensive to run a node-- that's part of the reason why efforts like the satellite broadcast of Bitcoin exist.  But just because there is a problem doesn't mean that any particular alternative is a solution.

Just because someone takes on a cost does not mean they are making a useful contribution.  The first node on amazon was a useful contribution, perhaps you could argue that one in each availability zone is a contribution, but we're vastly beyond that, and adding more nodes on amazon mostly just improves the ability to monitor traffic without being noticed for amazon and sybil attackers who purchase their services.
38  Bitcoin / Development & Technical Discussion / Re: Total Number of full nodes operating. Less than 10k. on: September 05, 2018, 07:35:04 AM
Why can we not have services where people make donations to a global company with enough resources to host these full nodes for them. <Not virtual servers in the cloud>, like we have with Cloud mining.
This completely misses the point.  If one doesn't care about decentralization, the whole of the Bitcoin system can run on a _single_ node; there isn't a need for multiple nodes (much less many) except for decentralization purposes.

Paying someone to run nodes (or running one yourself on a third party controlled VPS service like amazon or digital ocean, for that matter) wouldn't serve much of a purpose.
39  Bitcoin / Development & Technical Discussion / Re: Total Number of full nodes operating. Less than 10k. on: September 02, 2018, 07:29:41 PM
We need more companies using Bitcoin and to increase confidence in the project it would be natural for them to opt for a full node as well.
That is something I believed in, say, 2011 -- even the whitepaper says that merchants should run a node even with lite clients available...

But the norm for companies these days, especially small companies and startups, is aggressive outsourcing of all technical infrastructure even in their core domain.  So for example, there have been many bitcoin exchanges that don't run their own nodes, but outsource their transaction handling to third parties, and those third parties don't even operate their own equipment, but rent VPS service by the hour from companies like amazon.  As a result, it turns out that companies, even specialist "bitcoin companies", can't be counted on to operate nodes even where a simple risk analysis would say it was in their best interest to do so.

That was a period in time before mining centralisation and ASICs so there would've been thousands of people with small mining rigs.
Sorry, that is simply untrue. By mid 2011 almost everyone mining was using pools and not running nodes to mine.

Quote
That's why ETH has more nodes of that nature than BTC these days.
Ethereum has _vastly_ fewer "nodes" than Bitcoin. You've been fed misinformation that comes from comparing the total number of ethereum nodes (listening or not) to just the listening bitcoin nodes.  Right now the numbers are 15,277 total ethereum vs 83,096 bitcoin nodes.  Moreover, the common ethereum configuration has security properties a lot more like Bitcoin SPV: they don't validate the history when they join; they just blindly trust the hash power for it.
40  Bitcoin / Development & Technical Discussion / Re: Total Number of full nodes operating. Less than 10k. on: September 02, 2018, 02:10:37 AM
The number has definitively increased a lot,
Only after falling a lot.  E.g. on Jan 3rd 2012 there were over 16500 listening nodes tracked by sipa's seeder.
