Bitcoin Forum
Author Topic: The duplicate input vulnerability shouldn't be forgotten  (Read 2271 times)
Quickseller
Copper Member
Legendary
Activity: 2870
Merit: 2298
September 23, 2018, 08:00:59 AM
 #21

Quote
Multiple implementations means more lines of code and hence more bugs. It will cause frequent chain splits and encourage a new range of sybil attacks.

This comes down to one's central belief as to what 'Bitcoin' is... is Bitcoin a collection of consensus rules? Or is it software maintained by a very small group of people?

If your argument is that Bitcoin is software run by a small group of people (e.g. those who decide what gets published every time there is an upgrade), then Bitcoin is much more centralized than most central banks.

This also comes down to another important question: who exactly should be running a full node? It is my belief that businesses, and those who are going to receive bitcoin prior to giving anything of value to their trading partners, are the ones who should run a full node. Anyone who sends valuable property to someone prior to receiving bitcoin is already at risk of outright not receiving bitcoin in return, so the value of running a fully validating node is minimal. Further, it is the responsibility of those who are running a full node to personally ensure the software they are running is going to perform as they believe it will, and not to rely on third parties for this.
aliashraf
Legendary
Activity: 1456
Merit: 1174
Always remember the cause!
September 23, 2018, 09:55:55 AM
 #22

Quote from: Quickseller on September 23, 2018, 08:00:59 AM
Quote
Multiple implementations means more lines of code and hence more bugs. It will cause frequent chain splits and encourage a new range of sybil attacks.

This comes down to one's central belief as to what 'Bitcoin' is... is Bitcoin a collection of consensus rules? Or is it software maintained by a very small group of people?

If your argument is that Bitcoin is software run by a small group of people (e.g. those who decide what gets published every time there is an upgrade), then Bitcoin is much more centralized than most central banks.

This also comes down to another important question: who exactly should be running a full node? It is my belief that businesses, and those who are going to receive bitcoin prior to giving anything of value to their trading partners, are the ones who should run a full node. Anyone who sends valuable property to someone prior to receiving bitcoin is already at risk of outright not receiving bitcoin in return, so the value of running a fully validating node is minimal. Further, it is the responsibility of those who are running a full node to personally ensure the software they are running is going to perform as they believe it will, and not to rely on third parties for this.
I agree both problems you mention are crucial, but I'm afraid they are not relevant in this context.

Definitely, bitcoin is a protocol, and as a protocol it has to be implemented somehow. One might buy a closed-source implementation, or code a proprietary one down to the metal, and participate in the protocol; but when people use an open-source implementation for this, things become a bit more complicated. In the other cases it would be the user's own responsibility to take care of their safety, but for open-source software it is the community's credit that is being spent and that assures users of their security.

Obviously, it would be much safer for a community to take care of one implementation with fewer lines of code.

As I understand it, your concern is governance, but that is another topic (an unsolved one, I suppose), and there is no guarantee we could have a solution optimized for both immunity to software bugs and governance at the same time in one single package, at least right now.

As for your assertions about the limited value of running a full node, I do agree again; but when it comes to an implementation used by a significant number of users, this value accumulates to sensitive levels, and people may lose both considerable amounts of money and (more importantly) their faith in the whole ecosystem.

DooMAD
Legendary
Activity: 3766
Merit: 3099
Leave no FUD unchallenged
September 23, 2018, 12:17:14 PM
 #23

Quote from: aliashraf on September 23, 2018, 09:55:55 AM
Obviously, it would be much safer for a community to take care of one implementation with fewer lines of code.

I think I can guess where this is heading.     Roll Eyes



Quote
That being said, having multiple implementations is good for the individual who runs multiple nodes with different implementations. With multiple nodes each running different software, an attack exploiting a critical bug lets them know an attack is going on. If everyone ran multiple nodes with different implementations, then multiple implementations would be fine. The network would not shut down and there wouldn't be any network partitioning. But not everyone is going to do that.

Perhaps not everyone would need to.  If we adopt theymos' idea that larger Bitcoin companies should effectively place a small percentage of their employees on secondment with Core, what if instead they ran a Core node but also maintained a second node with their own business-oriented implementation?  Something that might focus more on features for merchants, for example.  Then they could report any inconsistencies, and issues like this would be far less likely to go unnoticed for 18 months.  I'm unsure how many companies it would take for the idea to be effective, but if it worked, that would help create a decent safety net.
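A minimal sketch of the kind of cross-implementation watchdog such a company might run, assuming both nodes expose Bitcoin Core's standard getbestblockhash JSON-RPC call; the URLs, ports, credentials, and polling interval are placeholders:

Code:
import time
import requests

# Two locally run nodes: Bitcoin Core plus an independent implementation.
# Ports and credentials below are illustrative placeholders.
NODES = {
    "core":  ("http://127.0.0.1:8332", ("rpcuser", "rpcpass")),
    "other": ("http://127.0.0.1:8342", ("rpcuser", "rpcpass")),
}

def best_block_hash(url, auth):
    payload = {"jsonrpc": "1.0", "id": "watchdog",
               "method": "getbestblockhash", "params": []}
    return requests.post(url, json=payload, auth=auth, timeout=10).json()["result"]

while True:
    tips = {name: best_block_hash(url, auth) for name, (url, auth) in NODES.items()}
    if len(set(tips.values())) > 1:
        # Brief divergence is a normal reorg; persistent divergence is the
        # kind of inconsistency worth reporting upstream.
        print("WARNING: chain tips diverge:", tips)
    time.sleep(60)

Persistent disagreement between the two tips is exactly the kind of signal that could have surfaced a consensus bug like CVE-2018-17144 the first time an exploiting block appeared.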



Cøbra
Bitcoin.org domain administrator
Full Member
Activity: 123
Merit: 469
September 23, 2018, 04:49:14 PM
Merited by Quickseller (3)
 #24

I think the problem is that Core has too many developers but not enough review/testing, and there's simply too much going on at once. When a lot of the changes are optimizations, refactoring, and the occasional networking/consensus-touching stuff, it's inevitable that you will get situations where a scary bug like this slips through due to all the emergent complexity. It's not like all these guys are doing GUI features. I like the idea of a fork with only necessary consensus changes (as these are very well tested). There should be a choice between two reference implementations: "safe but slow" and "slightly more risky but fast". The more I think about it, the more an LTS version with only consensus changes and critical bug fixes makes a lot of sense in the context of Bitcoin.

To be honest, some really strange decisions get made at Bitcoin Core. For example, right now a huge majority of the network is running vulnerable software, but because the alert system was removed, there's no way to reach them and tell them to upgrade. We just have to pray they check certain sites. Makes no sense. Some sort of alert feature in software that handles money is a necessity; it could have been enabled by default with an option to disable it. Satoshi was so smart and practical to add that.
gmaxwell
Moderator
Legendary
Activity: 4158
Merit: 8382
September 23, 2018, 04:53:22 PM
Last edit: September 23, 2018, 06:38:57 PM by gmaxwell
Merited by Foxpup (20), theymos_away (10), suchmoon (4), DooMAD (2), bones261 (2), ABCbits (1)
 #25

Just responding to this particular fork of discussion now,

Quote
I know this is probably the last argument most people want to hear, but is this not a case where more independent implementations would result in less risk?

They would create more risk. I don't think there is any reason to doubt that this is an objective fact which has been borne out by history.

First, failures in software are not independent. For example, when BU nodes were crashing due to xthin bugs, Classic was also vulnerable to effectively the same bug, even though its code was different and some triggers that would crash one wouldn't crash the other. There have been many other bugs in Bitcoin that have been straight-up replicated many times, for example the vulnerability to crashes through memory exhaustion due to loading a lot of inputs from the database concurrently: every other implementation had it too, and in some it caused a lot more damage.

Even entirely independently written software tends to have similar faults:

Quote
By assuming independence one can obtain ultra-reliable-level estimates of reliability even though the individual versions have failure rates on the order of 10^-4. Unfortunately, the independence assumption has been rejected at the 99% confidence level in several experiments for low reliability software.

Furthermore, the independence assumption cannot ever be validated for high reliability software because of the exorbitant test times required. If one cannot assume independence then one must measure correlations. This is infeasible as well—it requires as much testing time as life-testing the system because the correlations must be in the ultra-reliable region in order for the system to be ultra-reliable. Therefore, it is not possible, within feasible amounts of testing time, to establish that design diversity achieves ultra-reliability. Consequently, design diversity can create an “illusion” of ultra-reliability without actually providing it.
(source)

More critically, the primary thing the Bitcoin system does is come to consensus:  Most conceivable bugs are more or less irrelevant so long as the network behaves consistently.  Is an nlocktime check a < or a <= check? Who cares... except if nodes differ the network can be split and funds can be taken through doublespending the reorg.  Are weird "hybrid pubkeys" allowed? Who cares... except if nodes differ the network can be split and funds can be taken through doublespending the reorg.  What happens when a transaction has a nonsensical signature that indicates sighash single without a matching output? Who cares... except if nodes differ the network can be split and funds can be taken through doublespending the reorg. And so on.  For the vast majority of potential bugs, having multiple incompatible implementations turns a non-event into an increasingly serious vulnerability.
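A toy illustration of that first boundary question, with invented function names and a deliberately simplified finality rule: two nodes that differ only in < versus <= agree at every height except one, and that single disagreement is enough to split the network.

Code:
def is_final_node_a(locktime, height):
    return locktime < height     # one implementation's reading of the rule

def is_final_node_b(locktime, height):
    return locktime <= height    # a plausible-looking independent rewrite

locktime = 500_000
for height in (499_999, 500_000, 500_001):
    a = is_final_node_a(locktime, height)
    b = is_final_node_b(locktime, height)
    print(height, a, b, "<-- chain split" if a != b else "")

At height 500,000 node B accepts a transaction that node A rejects; a block containing it partitions the network exactly as described above.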

Even for an issue which isn't a "who cares", like allowing inflation of the supply, adding a surprise disagreement between nodes does not help matters!  The positive effect it creates is that you might notice it after the fact faster, but for that benefit you don't actually need additional full implementations, just monitoring code.  In the case of CVE-2018-17144 the sanity checks on node startup would also detect the bad block. On the negative side, they just cause a consensus split and result in people being exposed to funds loss in a reorg-- which is the same kind of exposure that they'd have with a single implementation + monitoring then fixing the issue when detected, other than perhaps the reorg(s) might be longer or shorter in one case or another.
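For readers who haven't looked at the bug itself: the heart of CVE-2018-17144 was a skipped duplicate-inputs sanity check. A rough Python transliteration of the kind of check involved (not Core's actual C++; the rejection label echoes Core's "bad-txns-inputs-duplicate" error):

Code:
def check_no_duplicate_inputs(tx_inputs):
    """tx_inputs: list of (txid, vout) outpoints the transaction spends."""
    seen = set()
    for outpoint in tx_inputs:
        if outpoint in seen:
            return False   # reject: bad-txns-inputs-duplicate
        seen.add(outpoint)
    return True

# Spending two different outputs of one txid is fine;
# spending the same output twice must be rejected.
assert check_no_duplicate_inputs([("ab" * 32, 0), ("ab" * 32, 1)])
assert not check_no_duplicate_inputs([("ab" * 32, 0), ("ab" * 32, 0)])

Skipping this check during block validation is what allowed a block to spend one output twice, crashing some nodes and potentially inflating the supply on others.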

In this case ABC also recently totally reworked the detection of double spends as a part of their "canonical transaction order" changes, which require that double-spending checks be deferred and run as a separate pass after processing, because transactions are required to be out of their logical causal ordering.  Yet they both failed to find this bug as part of testing that change and also failed to accidentally fix it.

Through all the history of Bitcoin, altcoin forks copying the code, other reimplementations and many other consensus bugs, some serious and some largely benign, this is the first time someone working on another serious implementation that people actually use has actually found one.  I might be forgetting something, but I'm certainly not forgetting many. Kudos to awemany. In all other cases they either had the same bugs without knowing it, or accidentally fixed them, creating fork risk and vulnerability without knowing it. It also isn't like having multiple implementations is new-- for the purpose of this point every one of the 1001 altcoins created by copying the Bitcoin code counts as a separate implementation too, or at least the ones which are actively maintained do.

Having more versions also dilutes resources, spreading review and testing out across other implementations. For the most part historically the Bitcoin project itself hasn't suffered much from this: the alternatives have tended to end up as just barely maintained one- or two-developer projects (effectively orphaning users that depended on them-- another cost of that diversity), and so it doesn't look like they caused much meaningful dilution to the main project.  But it clearly harms the alternatives, since they usually end up with just one or two active developers and seldom have extensive testing.  Node diversity also hurts security by making it much more complicated to keep confidential fixes private. When an issue hits multiple things its existence has to be shared with more people earlier, increasing the risk of leaks, and coincidentally timed cover changes can give away a fix which would otherwise go without notice. In a model with competing implementations some implementers might also hope to gain financially by exploiting their competitors' bugs, further increasing the risk of leaks.

Network effects appear to mean that in the long run only one or maybe a few versions will ultimately have enough adoption to actually matter. People tend to flock to the same implementations because they support the features they need and because they can get the best help from others using the same stuff. Even if the benefits didn't usually fail to materialize, they'd fail to matter through disuse.

Finally, diversity can exist in forms other than creating more probably disused, probably incompatible implementations.  The Bitcoin software project is a decentralized collaboration of dozens of regular developers.  Any one of the participants there could go and make their own implementation instead (a couple have, in addition to contributing).  The diversity of contributors there does find and fix and prevent LOTS of bugs, but does so without introducing more incompatible implementations to the network.   That is, in fact, the reason that many people contribute there at all: to get the benefit of other people hardening the work and to avoid damaging the network with potential incompatibilities.

All this also ignores the costs of implementation diversity unrelated to reliability and security, such as the redundancy often making improvements more expensive and slow to implement.

TLDR: In Bitcoin the system as a whole is vulnerable to disruption if any popular implementation has a consensus bug: the disruption can cause financial losses even for people not running the vulnerable software, by splitting consensus and causing reorgs. Implementations also tend to frequently reimplement the same bugs even in the rare case where they aren't mostly just copying (or transliterating) code, which is good news for staying consistent but means that even when consistency isn't the concern the hoped-for benefit usually will not materialize. Also, network effects tend to keep diversity low regardless, which again is good for consistency but bad for diversity actually doing good. To the extent that diversity can help, it does so primarily through review and monitoring, which are better achieved through collaboration on a smaller number of implementations.

Cheers,
gmaxwell
Moderator
Legendary
Activity: 4158
Merit: 8382
September 23, 2018, 05:00:55 PM
Last edit: September 23, 2018, 06:48:42 PM by gmaxwell
Merited by Foxpup (4), JayJuanGee (1), aliashraf (1)
 #26

Quote from: Cøbra on September 23, 2018, 04:49:14 PM
For example, right now a huge majority of the network is running vulnerable software, but because the alert system was removed, there's no way to reach them and tell them to upgrade. We just have to pray they check certain sites. Makes no sense. Some sort of alert feature in software that handles money is a necessity; it could have been enabled by default with an option to disable it. Satoshi was so smart and practical to add that.
You do realize that you're lamenting the removal of a back door key that could be used to remotely crash nodes?   Even without the crash, it inherently meant that a single potentially compromised party had the power to pop up potentially misleading messages to all users of the software-- like falsely claiming there were issues that required replacing the software with insecure alternatives. At least when people get news "from the internet" there are many competing information sources that could warn people to hold off.

No thanks.

No one who wants that power should by any account be allowed to have it.  Not if Bitcoin is to really show its value as a decentralized system.

I don't doubt that better notification systems could be built... things designed to prevent single bad actors from popping up malicious messages, but even then there is no reason to have such tools directly integrated into Bitcoin itself.  I'd love to see someone create a messaging system that could send messages in a way that no single compromised party could send a message or block publication, where messages couldn't be targeted at small groups but had to be broadcast even in the face of network attacks, where people could veto messages before they're displayed, and where message display would be spread out over time so users exposed earlier could sound the alarm about a bad message... etc.  But there is no reason for such a system to be integrated into Bitcoin; it would be useful far beyond it.
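A sketch of just the first property on that wishlist, with stubbed-out cryptography: an alert becomes actionable only when some threshold of designated keys has signed it, so no single compromised party can issue one. The names and the m-of-n choice are invented for illustration; a real design would still need the broadcast, veto, and staggered-display machinery described above.

Code:
THRESHOLD = 3   # any 3 of the designated keys must sign (illustrative m-of-n)

def alert_is_actionable(message, signatures, pubkeys, verify):
    """verify(pubkey, message, sig) -> bool is assumed to come from a real
    signature library; it is deliberately left abstract here."""
    signers = {pk for pk in pubkeys
               if any(verify(pk, message, sig) for sig in signatures)}
    return len(signers) >= THRESHOLD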

But it just goes to show that for every complex problem there is a simple centralized solution and inevitably someone will argue for Bitcoin to adopt it.
gmaxwell
Moderator
Legendary
Activity: 4158
Merit: 8382
September 23, 2018, 05:44:52 PM
Last edit: September 23, 2018, 08:33:09 PM by gmaxwell
Merited by Foxpup (20), theymos (10), ABCbits (1)
 #27

Quote
Perhaps the Core release schedule is too fast. Even though it sometimes already feels painfully slow, there's no particular "need to ship", so it could be slowed down arbitrarily in order to quadruple the amount of testing code or whatever.

I believe slower would potentially result in less testing and not likely result in more at this point.

If we had a problem where newly introduced features frequently turned out to have serious bugs discovered shortly after shipping, there might be a case that delaying improvements more before putting them into critical operation would improve the situation... but I think we've been relatively free of such issues.  The kinds of issues that will be found with just a bit more time are almost all already found prior to release.

Quadrupling the amount of (useful) testing would be great.  But the way to get there may be via more speed, not less. You might drive safer if you drove a bit slower, but below some speed, you become more likely to just fall asleep while driving.

Imagine a bridge construction crew with generally good safety practices that has a rare fatal accident. Some government bureaucrat swings in and says "you're constructing too fast: it would be okay to construct slower, fill out these 1001 forms in triplicate for each action you take to prevent more problems".  In some kind of theoretical world the extra checks would help, or at least not hurt.  But for most work there is a set of optimal paces where the best work is done.  Fast enough to keep people's minds maximally engaged, slow enough that everything runs smoothly and all necessary precautions can be taken.  We wouldn't be too surprised to see that hypothetical crew's accident rate go up after a change to increase overhead in the name of making things more safe: either efforts that actually improve safety get diverted to safety theatre, or otherwise some people just tune out, assume the procedure is responsible for safety instead of themselves, and ultimately make more errors.

So I think rather, it would be good to say that things should be safer _even if_ it would be slower. This is probably what you were thinking, but it's important to recognize that "just do things slower" itself will not make things safer when the effort isn't already suffering from momentary oopses.

Directing more effort into testing has been a long term challenge for us,  in part because the art and science of testing is no less difficult than any other aspect of the system's engineering. Testing involves particular skills and aptitudes that not everyone has.

What I've found is that when you ask people who aren't skilled at testing to write more tests, what they generally write are rigid, narrow-scope, known-response tests.    So what they do is imagine the typical input to a function (or subsystem), feed that into it, and then make the test check for exactly the result the existing function produces.  This sort of test is of very low value:  It doesn't test extremal conditions, especially not the ones the developer(s) hadn't thought of (which are where the bugs are most likely to be), it doesn't check logical properties so it doesn't give us any reason to think that the answer the function is currently getting is _right_, and it will trigger a failure if the behaviour changes even if the change is benign, but only for the tested input(s). They also usually only test the smallest component (e.g. a unit test instead of a system test), and so they can't catch issues arising from interactions.  These sorts of tests can be worse than no test in several ways: they falsely make the component look tested when it isn't, and they create tests that spew false positives as soon as you change something, which both discourages improvements and encourages people to blindly update tests and miss true issues. A test that alerts on any change at all can be appropriate for normative consensus code which must not change behaviour at all, but good tests for that need to test boundary conditions and random inputs too. That kind of test isn't very appropriate for things that are okay to change.
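A toy contrast of the two styles, with an invented clamp function standing in for the unit under test (nothing here is taken from Core):

Code:
import random

def clamp(x, lo, hi):
    """Intended behaviour: constrain x to the range [lo, hi]."""
    if x < lo:
        return lo
    if x > hi:
        return x         # bug: should be hi; typical inputs never reach it
    return x

# The "known response" style criticized above: one typical input, checked
# against whatever the current code happens to return. Passes, proves little.
assert clamp(5, 0, 10) == 5

# A property-style test: random and extremal inputs checked against the
# logical property the function must satisfy. This catches the bug at once.
for _ in range(1000):
    x = random.randint(-100, 100)
    assert 0 <= clamp(x, 0, 10) <= 10, f"contract violated at x={x}"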

In this case, our existing practices (even those of two years ago) would have been at least minimally adequate to have prevented the bug. But they weren't applied evenly enough.  My cursory analysis of the issue suggests that there was a three-component failure: The people who reviewed the change had been pre-primed by looking at the original introduction of the duplicate tests, which had a strong proof that the test was redundant. Unfortunately, later changes had made it non-redundant, apparently without anyone realizing it. People who wouldn't have been snowed by it (e.g. Suhas never saw the change at all, and since he wasn't around for PR443 he probably wouldn't have easily believed the test was redundant) just happened to miss that the change happened, and review of the change got distracted by minutiae, which might have diminished its effectiveness. Github, unfortunately, doesn't provide good tools to help track review coverage. So this is an area where we could implement some improved process that makes sure the good things we do are done more uniformly. Doing so probably won't make anything slower.  Similarly, a more systematic effort to assure that all functionality has good tests would go a long way: New things in Bitcoin tend to be tested pretty well, but day-zero functionality that never had tests to begin with isn't always.

It takes time to foster a culture where really good testing happens, especially because really good testing is not the norm in the wider world.  Many OSS and commercial projects hardly have any tests at all, and many that do hardly have good ones. (Of course, many also have good tests too... it's just far from universal.)  We've come a long way in Bitcoin-- which originally had no tests at all, and for a long time only had 'unit tests' that were nearly useless (almost entirely examples of that kind of bad testing). Realistically it'll continue to be slow going especially since "redesign everything to make it easier to test well" isn't a reasonable option, but it will continue to improve. This issue will provide a nice opportunity to nudge people's focus a bit more in that direction.

I think we can see the negative effect of "go slower" in libsecp256k1.   We've done a lot of very innovative things with regard to testing in that sub-project, including designing the software from day one to be amenable to much stronger testing, and had some very good results from doing so.  But the testing doesn't replace review, and as a result the pace in the project has become very slow-- with nice improvements sitting in PRs for years. Slow pace results in slow pace, and so less new review and testing happen too... and also fewer new developments that would also improve security get completed.

The relative inaccessibility of multisig and hardware wallets is probably an example where conservative development in the Bitcoin project has meaningfully reduced user security.

It's also important to keep in mind that in Bitcoin performance is a security consideration too.  If nodes don't keep well ahead of the presented load, the result is that they get shut off or never started up by users, larger miners get outsized returns, miners centralize, and attackers can partition the network by resource-exhausting nodes.  Perhaps in an alternative universe where, after it was discovered in 2013 that pre-0.8 nodes would randomly reject blocks over ~500k, the block size had been permanently limited to 350k, we could have operated as if performance weren't critical to the survival of the system; but that isn't what happened. So "make the software triple-check everything and run really slowly" isn't much of an option either, especially when you consider that the checks themselves can have bugs.
Cøbra
Bitcoin.org domain administrator
Full Member
Activity: 123
Merit: 469
September 23, 2018, 05:52:11 PM
Merited by Quickseller (2), ABCbits (1)
 #28

Quote from: gmaxwell on September 23, 2018, 05:00:55 PM
Quote from: Cøbra on September 23, 2018, 04:49:14 PM
For example, right now a huge majority of the network is running vulnerable software, but because the alert system was removed, there's no way to reach them and tell them to upgrade. We just have to pray they check certain sites. Makes no sense. Some sort of alert feature in software that handles money is a necessity; it could have been enabled by default with an option to disable it. Satoshi was so smart and practical to add that.
You do realize that you're lamenting the removal of a back door key that could be used to remotely crash nodes?   Even without the crash, it inherently meant that a single potentially compromised party had the power to pop up potentially misleading messages to all users of the software-- like falsely claiming there were issues that required replacing the software with insecure alternatives.

No thanks.

No one who wants that power should by any account be allowed to have it.  Not if Bitcoin is to really show its value as a decentralized system.

But it just goes to show that for every complex problem there is a simple centralized solution and inevitably someone will argue for Bitcoin to adopt it.

Bitcoin is the consensus rules and protocol, which Bitcoin Core implements, but Bitcoin is not synonymous with everything in Bitcoin Core. The alert system was never part of the consensus rules, and therefore doesn't have anything to do with Bitcoin, so being excessively concerned with decentralization makes no sense in this context.

Maybe the previous implementation of an alert system was flawed, with the crashing bugs and the alert key being controlled by dubious people, but this doesn't make alerts in and of themselves a terrible idea. The unlikely event of the alert key being abused is overcome by having a "final" alert broadcast; any harm an attacker could do is minimal. The benefits far outweigh the risks here. But somehow allowing thousands of users to unknowingly be put into situations where they're running insecure financial software is acceptable to you so long as you believe it improves some vague notion of decentralization (it doesn't). Let's be glad this bug wasn't something that could be exploited to hack into nodes remotely and steal funds, like a buffer overflow or use-after-free.
gmaxwell
Moderator
Legendary
Activity: 4158
Merit: 8382
September 23, 2018, 06:06:18 PM
Last edit: September 23, 2018, 06:35:57 PM by gmaxwell
Merited by Foxpup (2)
 #29

Quote from: Cøbra on September 23, 2018, 05:52:11 PM
and therefore doesn't have anything to do with Bitcoin,
The fact that it created real vulnerabilities that could have been used to cause massive funds loss, even for people who removed the code, suggests otherwise!  Something doesn't have to be part of consensus rules directly to create a risk for the network.

The general coding style in Bitcoin makes RCEs unlikely, or otherwise I would say that I would be unsurprised if the ALERT system had one at least at some point in its history... It was fairly sloppy and received inadequate attention and testing, because of its infrequent use and because only a few parties (like the Japanese government ... Sad ) could make use of it.

and I say this as one of the uh, I think, three people to have ever sent an alert.
Cøbra
Bitcoin.org domain administrator
Full Member
Activity: 123
Merit: 469
September 23, 2018, 06:08:15 PM
 #30

Quote
In an effort to increase communications, we are launching an opt-in, announcement-only mailing-list for users of Bitcoin Core to receive notifications of security issues and new releases.

https://bitcoincore.org/en/2016/03/15/announcement-list/

Apparently, according to Luke-jr, who's subscribed, Bitcoin Core still hasn't even bothered to use their announcement mailing list (specifically set up for security issues and new releases) to warn about CVE-2018-17144 yet (a security issue fixed in a new release, no less!). That's pretty incompetent if you ask me.

https://twitter.com/LukeDashjr/status/1043917007303966727
gmaxwell
Moderator
Legendary
Activity: 4158
Merit: 8382
September 23, 2018, 06:13:16 PM
Merited by Foxpup (2), JayJuanGee (1), BobLawblaw (1)
 #31

Quote from: Cøbra on September 23, 2018, 06:08:15 PM
Apparently, according to Luke-jr, who's subscribed, Bitcoin Core still hasn't even bothered to use their announcement mailing list (specifically set up for security issues and new releases) to warn about CVE-2018-17144 yet (a security issue fixed in a new release, no less!). That's pretty incompetent if you ask me.
The announcement of the new release and fix was sent to it.  You mean the guy who controls bitcoin.org isn't subscribed to it? That's pretty ... suboptimal, if you ask me. Smiley

It's easy to point fingers and assert that the world would be better if other people acted according to your will instead of their own free will; harder to look at your own actions and contemplate what you could do to improve things.  To risk making the same mistake myself: Instead of calling people who owe you nothing incompetent, you could offer to help...  Just a thought.  It's the advice that I've tried to follow myself, and I think it's helped improve the world a lot more than insults ever would.
aliashraf
Legendary
Activity: 1456
Merit: 1174
Always remember the cause!
September 23, 2018, 07:09:48 PM
Merited by ABCbits (1)
 #32

I suggest we are done with both multiple competing implementations and the alert system. Aren't we?

If yes, it is time to ask the critical questions once again:

What is there to be learned from the CVE-2018-17144 incident? What measures should be taken hereafter?

I'm not used to it, but exceptionally I agree with Greg Maxwell here: multiple implementations are a stupid idea, and the removal of the alert system in bitcoin was a correct decision, imo. But I think it is more about what is right and what to do rather than rejecting false ideas and pointing out what should not be done.
Cøbra
Bitcoin.org domain administrator
Full Member
Activity: 123
Merit: 469
September 23, 2018, 07:16:22 PM
Merited by pawel7777 (2)
 #33

Quote from: gmaxwell on September 23, 2018, 06:13:16 PM
It's easy to point fingers and assert that the world would be better if other people acted according to your will instead of their own free will; harder to look at your own actions and contemplate what you could do to improve things.  To risk making the same mistake myself: Instead of calling people who owe you nothing incompetent, you could offer to help...  Just a thought.  It's the advice that I've tried to follow myself, and I think it's helped improve the world a lot more than insults ever would.

I think this discussion is getting away from the general topic, but you were recently someone who would attack other development teams trying to "improve the world" in their own way with even harsher terms and toxic insults. I don't think polite discourse is your strong suit either, when you are known for pointing fingers at people that fork your project.

What I seem to be getting from your posts is that almost every change is a consensus risk, and that development should move faster or at the same pace, but not slower. You claim that moving slower would potentially result in less testing, which makes no sense. You can simply have a feature or optimization on a test branch, encourage users to mess with it (by actually interacting with the feature, as awemany was doing), possibly even with some sort of bounty scheme, and once sufficient time has passed and lots of different people and companies in the industry have tried to break it, you merge it in. I can't fathom how moving slower wouldn't help here.
gmaxwell
Moderator
Legendary
Activity: 4158
Merit: 8382
September 23, 2018, 07:50:05 PM
Last edit: September 23, 2018, 08:47:26 PM by gmaxwell
Merited by Foxpup (15), suchmoon (4), ABCbits (1)
 #34

Quote from: aliashraf on September 23, 2018, 07:09:48 PM
but I think it is more about what is right and what to do rather than rejecting false ideas and pointing out what should not be done.
I think I did speak to it some.

More directly: Rather than shy from danger we must continue to embrace it and manage it.

The safest change is a change that you think is dangerous but is actually safe, the next most safe is a change that is dangerous and you know is dangerous, and least safe is one you think is safe but is actually dangerous. There is pretty much no such thing as a change you think is safe and actually is safe. There is also no such thing as a failure to change something that should be changed that is safe, because the world doesn't stay constant and past reliability doesn't actually mean something was free from errors.

Obviously danger for the sake of danger would be dumb, so we should want to make a reasonable number of known dangerous changes, which are clearly worth it, and that justify the review and testing needed to make them safe enough which will, along the way, find issues we didn't know we had too.

If instead people feel burned by this issue and shy even further away from dangerous changes, the activity won't justify due care, and eventually we'll get burned by a "safe" change that isn't actually safe.

As mentioned, even the complete absence of changes isn't safe, since without changes old vulnerabilities and security-relevant shortcomings don't get addressed (e.g. I mentioned multisig and hardware wallets, but there are many others).

Not to point fingers-- especially as there was no causal relationship in this instance-- but it occurs to me that in many recent posts you've advocated for radically reducing the interblock interval-- an argument that is only anything but laughable specifically because of a multitude of changes like 9049, which introduced this bug, squeezing out every last ounce of performance (and knocking the orphan rate at 10 minutes from several percent to something much smaller).  If our community culture keeps assuming that performance improvements come for free (or that the tradeoffs from load can be ignored and dealt with later) then they're going to keep coming at the expense of other considerations.

As far as concrete efforts go, it's clear that there needs to be a greater effort to go systematically through the consensus rules and make sure every part of the system has high quality testing-- even old parts (which tend to be undertested)-- and that, as needed, the system is adjusted to make higher quality testing possible. To achieve that it will probably also be necessary to have more interaction about what constitutes good testing; I'll try to talk to people more about what we did in libsecp256k1, since I think it's much closer to a gold standard. I am particularly fond of 'mutation' style testing, where you test the tests by introducing bugs and making sure the tests fail; but it's tricky to apply to big codebases, and is basically only useful after achieving 100% condition-decision test coverage. Here is an example of me applying that technique in the past with good results (it found several serious shortcomings in the tests and found an existing long-standing bug in another subsystem as a side effect).
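A hand-rolled sketch of that mutation idea, with a toy is_final function and a two-assertion suite standing in for real consensus code and its tests (real mutation tools automate the operator-flipping over whole codebases):

Code:
ORIGINAL = "def is_final(locktime, height):\n    return locktime < height\n"

# Mechanical mutants: flip the comparison operator, then re-run the suite.
MUTANTS = [ORIGINAL.replace("<", "<="), ORIGINAL.replace("<", ">")]

def suite_passes(src):
    ns = {}
    exec(src, ns)                        # load the (possibly mutated) code
    is_final = ns["is_final"]
    try:
        assert is_final(499_999, 500_000)       # strictly below the boundary
        assert not is_final(500_000, 500_000)   # exactly at the boundary
        return True
    except AssertionError:
        return False

assert suite_passes(ORIGINAL)            # the suite passes on the real code
for mutant in MUTANTS:
    # A surviving mutant (suite still green) means the tests are too weak.
    assert not suite_passes(mutant)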

I think there is also an opportunity to improve the uniformity of review but I'm not quite sure how to go about doing that: E.g. checklists have a bad habit of turning people into zombies, but there are plenty of cases where well constructed ones have had profoundly positive effects. Unfortunately we still just don't have enough really strong reviewers.  That cannot be improved too quickly simply because it takes a lot of experience to become really effective. We have more contributors now than in the past though, so there is at least the potential that many of the new ones will stick around and mature into really good reviewers.

We also shouldn't lose perspective of the big things that we're getting right in other ways, and the limitations of those approaches.  So for example, 9049 introduced a fault in the course of speeding up block propagation.  One alternative we've used since 2013 is implementing fast propagation externally: Doing so reduces the risk that new propagation stuff introduces bugs and also lets us try new techniques faster than would be possible if we had to upgrade the whole network.  Bitcoin Fibre has a new, radically different, ultra-super-mega-rocket science approach to propagating blocks. Fibre protocol's implementation has also been relatively chock-full of pretty serious bugs, but they haven't hurt Bitcoin in general because they're separate.  Unfortunately, we've found that with the separation few parties other than Matt's public fibre network run it, creating a massive single point of failure where if his systems are turned off block propagation speed will drop massively.  To address that, we've been creating simplified subsets of that work, hardening them up, making them safe, and porting them to the network proper-- BIP152 (compact blocks) is an example of that.  A gap still exists (fibre has much lower latency than BIP152) and the gap is still a problem, since propagation advantages amplify mining centralization (as well as make selfish mining much more profitable)... but it's less of one.  This is just one of the ways we've all managed the tension between a security-driven demand to improve performance and a security-driven demand to simplify and minimize risky changes. There just isn't a silver bullet.  There are lots of things we can do, and do do, but none magically solve all the problems. The most important thing is probably that we accept the scope of the challenge and face it with a serious and professional attitude, rather than pretending that something less than a super-human miracle is good enough. Smiley

gmaxwell
Moderator
Legendary
Activity: 4158
Merit: 8382
September 23, 2018, 08:04:04 PM
Last edit: September 23, 2018, 08:46:28 PM by gmaxwell
Merited by Foxpup (12), ABCbits (1)
 #35

Quote from: Cøbra on September 23, 2018, 07:16:22 PM
but you were recently someone who would attack other development teams trying to "improve the world" in their own way with even harsher terms and toxic insults
It would be more productive if you were specific instead of vague. The vague allegations, devoid of context, just come across as toxic themselves-- a character attack, rather than a complaint about something specific that could be handled better.

I believe my suggested improvements to you above-- subscribe to the list, look for things you can do to help-- were specific, actionable, well justified, and hopefully not at all personally insulting.

Quote
encourage users to mess with it, possibly even with some sort of bounty scheme, and once sufficient time has passed and lots of different people and companies in the industry have tried to break it, you merge it in
In general people ignore "test" things, they just don't interact with them at all unless the thing in question is on rails that will put them somewhere critical reasonably soon.

We see this pattern regularly, when people put up complex PRs and explicitly mark them as not-merge-ready they get fairly small amounts of substantive review, and then after marking them merge ready suddenly people show up and point out critical design mistakes. Resources are finite so generally people prefer to allocate them for things that will actually matter, rather than things that might turn out to be flights of fancy that never get finished or included. They aren't wrong to do it either, there are a lot more proposals and ideas than things that actually work and matter.  You can also see it in the fact that far more interesting issues are reported by users using mainnet rather than testnet.  This isn't an argument to not bother with pre-release testing and test environments: both do help. But I think that we're likely already getting most of the potential benefit out of these approaches.

"testing" versions also suffer from a lack of interaction with reality. Many issues are most easily detected in live fire use, sadly.  And many kinds of problems or limitations are not the sort of thing that we'd rather never have an improvement at all than suffer a bit of a problem with it.

Additionally, how much delay do you think is required? This issue went undetected for two years. Certainly with lower stakes we could not expect a "testing" version to find it _faster_.  Sticking every change behind a greater-than-two-year delay would mean a massive increase in exposure from the lack of beneficial changes, not to mention the knock-on cultural effects: I should have cited it specifically, but consider the study results suggesting that safety equipment like bike helmets or seatbelts sometimes results in less safe behavior. Checklists can be valuable, for example, but have also been shown to cause a reduction in personal responsibility and careful consideration. "I don't have to worry about reviewing this thing, because any issues will be found by the process and/or will be someone else's problem if they get through."

As I said above, if we had a series of concerning issues being found shortly after releases it might be a good case for going more slowly, generally. But that isn't the case. With only a few exceptions, when we find issues that are concerning enough to justify a fast fix they're old issues.  We do find many issues quickly, but before they're merged or released.

I've generally found that for software quality each new technique has its own burst of benefit, but continued deeper application of the technique has diminishing marginal returns. Essentially, any given approach to quality has a class of problems it stops substantially while being nearly blind to other kinds of issues. Even limited application tends to pick the low-hanging fruit.  As a result it's usually better to assure quality in many different ways by many different people, rather than dump inordinate effort into one or a few techniques.

Quote
You claim that moving slower would potentially result in less testing, which makes no sense.[...] I can't fathom how moving slower wouldn't help here.
I'm disappointed, I think I explained directly and via analogy why this is the case, but it doesn't seem to have been communicated to you. Perhaps someone else will give a go at translating the point, if it's still unclear. Sad
aliashraf
Legendary
Activity: 1456
Merit: 1174
Always remember the cause!
September 23, 2018, 11:12:16 PM
Last edit: September 25, 2018, 04:32:11 PM by aliashraf
Merited by Quickseller (2)
 #36

Greg Maxwell,
I truly appreciate your responsible presence here, but let me shed a different light on this issue. The way I understand the problem, we have multiple factors involved:

1- Bitcoin has seamlessly grown from an experimental project to a mission-critical system. A critical vulnerability exploit is totally intolerable and would cause serious damage to a large population. This is what has agitated us, reminded us of the critical aspects of the issue, and made theymos start this topic.

2- Although as a consensus-based protocol bitcoin is a social contract that inherently resists change and treats it as an existentially paradoxical subject, evolution is inevitable, just as in any other living system. This suggests the likelihood of an unusual development process beyond the common practices familiar in software engineering.

3- As the first widely adopted decentralized system, which surprisingly happened to be implemented in one of the most sensitive fields ever, monetary systems, bitcoin is an unprecedented social experiment. As exciting as this is, there are consequences, the most critical one being the ambiguity in governance.

4- Any system with a governance problem is liable to lose direction and become subject to technocratic decision-making. Lack of vision and strategy escalates sparse, discrete developments which are typically reduced to pure technical processes.


5- Both as a result of decentralization and governance issues on one hand, and because of a tradition based on the immutability of blockchain data as an initial requirement in bitcoin on the other, the client software is supposed to be downward compatible, the original blockchain is to be maintained, and bootstrapping fresh nodes from the genesis block through the whole chain should be supported. As a result, developers and contributors use sophisticated techniques to comply with these requirements.

6- Because of the last two factors, bitcoin now suffers from a software engineering problem: software bloat. Through the years, incremental development, plus efforts to keep the system downward compatible and the insistence on soft forks over hard forks, has led to a situation in which the bitcoin code is becoming more and more complicated and harder to understand, contribute to, and maintain.

I know you can argue in favor of soft forks or downward compatibility or bootstrapping very persuasively, but everything comes with a price, and if bitcoin were a simple centralized system with no governance issues, we would have had it completely redesigned and rewritten from scratch 2 or 3 times in the almost 10 years since its first release, in the most conservative scenario. So it is not about how good downward compatibility is; we are just used to supposing there is no other option.

Now we are here. We can't disrupt the governance situation, but there are definite reconsideration opportunities that we have to embrace. For instance, suppose somebody (a reckless person like me Wink ) suggests that maintaining the blockchain since genesis and bootstrapping from it is not necessary, and that we could use a snapshot of the UTXO set which, after being confirmed by some 10,000 blocks cumulatively, would let the historical data behind it become relaxed.

Before CVE-2018-17144, it would have been a controversial proposal and wouldn't have gotten any support, because nobody was aware of how critical the situation with the code is from a software engineering point of view, and the impact of such a proposal on making the code an order of magnitude more elegant and straightforward would not have been weighed properly.

Now, in the post-CVE-2018-17144 era, we have to be way more open to any proposal that helps make and keep the code as smart and compact as possible. It is a general paradigm shift; we should embrace it and help it happen with the least possible casualties and disasters. In this new era code simplicity and beauty come first, and we MUST put them at the top of our priority list.
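For what it's worth, a minimal sketch of the shape such a snapshot scheme might take, with entirely hypothetical names and a made-up serialization; it deliberately ignores the hard part, which is how the trusted snapshot hash gets agreed upon in the first place:

Code:
import hashlib

BURY_DEPTH = 10_000   # the "confirmed by like 10,000 blocks" figure above

def utxo_snapshot_hash(utxos):
    """utxos: iterable of (txid, vout, amount, scriptpubkey_hex) tuples."""
    h = hashlib.sha256()
    for entry in sorted(utxos):        # canonical ordering so hashes compare
        h.update(repr(entry).encode())
    return h.hexdigest()

def can_bootstrap_from(snapshot_height, tip_height, snapshot, trusted_hash):
    """Let a fresh node start from the snapshot only once it is deeply buried."""
    deeply_buried = tip_height - snapshot_height >= BURY_DEPTH
    return deeply_buried and utxo_snapshot_hash(snapshot) == trusted_hash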



Cøbra
Bitcoin.org domain administrator
Full Member
Activity: 123
Merit: 469
September 23, 2018, 11:30:37 PM
 #37

Quote from: gmaxwell on September 23, 2018, 08:04:04 PM
It would be more productive if you were specific instead of vague. The vague allegations, devoid of context, just come across as toxic themselves-- a character attack, rather than a complaint about something specific that could be handled better.

You have attacked me with character attacks by viciously claiming on Reddit that I had sold my credentials, and even sent me an e-mail out of the blue one day asking how much I "sold out" for. I don't have time to look through your extensive post history on /r/btc, but I remember you spent years wrestling with pigs and constantly harassing and deriding people that wanted to "improve the world". I attacked these people too, in similar ways, and many of them were incompetent, but I think you of all people aren't in a position to preach the value of polite discourse, since you're one of the more toxic/controversial figures in the Core team.

Quote from: gmaxwell on September 23, 2018, 08:04:04 PM
I'm disappointed, I think I explained directly and via analogy why this is the case, but it doesn't seem to have been communicated to you. Perhaps someone else will give a go at translating the point, if it's still unclear. Sad

Yes, this was your analogy:

Quote
Imagine a bridge construction crew with generally good safety practices that has a rare fatal accident. Some government bureaucrat swings in and says "you're constructing too fast: it would be okay to construct slower, fill out these 1001 forms in triplicate for each action you take to prevent more problems".  In some kind of theoretical world the extra checks would help, or at least not hurt.  But for most work there is a set of optimal paces where the best work is done.  Fast enough to keep people's minds maximally engaged, slow enough that everything runs smoothly and all necessary precautions can be taken.  We wouldn't be too surprised to see that hypothetical crew's accident rate go up after a change to increase overhead in the name of making things more safe: either efforts that actually improve safety get diverted to safety theatre, or otherwise some people just tune out, assume the procedure is responsible for safety instead of themselves, and ultimately make more errors.

This analogy is flawed and makes no sense. Bridge construction is completely different from software engineering through the open source process. Construction is a linear thing: you can't build multiple "prototypes" of real physical bridges and test them and choose between them; you need to build the entire thing once, and in that context you would be correct that it makes more sense to keep minds maximally engaged. But in Bitcoin Core, developers can work in their own branches with total freedom, and no red tape, so I fail to see how they wouldn't be engaged. There's nothing stopping them from working at an optimal pace in their own branch and then opening a pull request after their sprint to try to get the change merged in. There already exists a testing/review step; IMO there's no harm in making this step slightly longer and encouraging the community to try to break and mess up a new feature. Bounties can be paid to try to break stuff too.

Anyway I'm exiting this discussion because I feel like we're going to go around in circles and derail the thread, and I've said what I wanted to say. I think we should take the personal issues to PM or something. Cheers.
gmaxwell
Moderator
Legendary
Activity: 4158
Merit: 8382
September 24, 2018, 12:57:31 AM
Last edit: September 24, 2018, 03:33:49 AM by gmaxwell
Merited by Foxpup (8)
 #38

Quote from: Cøbra on September 23, 2018, 11:30:37 PM
You have attacked me with character attacks by viciously claiming on Reddit that I had sold my credentials,

This is the post that you're referring to:

Quote
I get the impression that cobra sold his credentials last year: He put up some sketchy warnings about the binaries on bitcoin.org then went quiet for a long time.

When he came back he started posting some really over-the-top rbtc conspiracy theory nonsense on reddit. When people moved to take action about it he suddenly said "oh my account was hacked" and dropped it. But the account wasn't used for the kind of petty vandalism that you normally see when a hacked account can't otherwise be used... Since then he's been slowly cranking up the psycho behavior, and right now he seems not far from the sudden behavior of the 'hacked' account.

Given that I'm not surprised to see the BCH pumping, and of course ignoring that whatever "better for payments" argument you can currently make for BCH could better be applied to Litecoin (which also has a lower interblock interval, AND segwit) and yet litecoin has mostly gone nowhere.

FWIW, no one wants a POW change more than Bitmain. They crank out chips privately for even obscure POWs then dump them on the public once they've reached diminishing returns on their own production. With sha256d they're competing against a huge installed base. Moreover, Bitmain has gone around unethically and unlawfully claiming patents on basic mining techniques, like series-wiring the chips to reduce converter costs, which were in use prior to Bitmain and which any competitive mining device for any POW would adopt.

I apologize for insulting you, it was really not my goal.  I'm not sure what else I was supposed to think when one day you're asking blockstream for money (and suggesting bitcoin.org/maybe you were broke) and then later [edit: I thought it was a few weeks, but it may just be that I only noticed the message then or there might have been more than one. I no longer have my blockstream account so I can't tell] started posting things like Merchant adoption will come naturally once people realize that the other coin is crippled by Blockstream and /u/nullc and that they can't transact without paying outrageous transaction fees. [...] Of course bitcoin.org should be changed to embrace Bitcoin Cash. Blockstream coin is not Bitcoin. [...] It's a form of censorship by Blockstream Core.  [...] This is what AXA invested in them for, to cripple the network. . I'd never seen you say anything like that before. And even more recently you continue to say things that look a lot like it to me, also, also, also (especially weird since you yourself told us downloads on bitcoin.org were unsafe, you seemed to think alternative downloads were a good idea, and then a year later are angry about it), also, also.   It's okay for you to go around suggesting "compromised by the NSA" without a single shred of evidence, but you think it's toxic for me to say that I "get [an] impression" and point out an apparent radical change in your behaviour? Sad

Theymos says he thinks you've been consistent all along, and I trust him to know, but it's not like my comments were coming out of nowhere.  I had no reason to dislike you previously; in fact, almost the sum total of my other interaction with you was stepping up to defend you when I thought people were unfairly attacking you after you said something easily misunderstood (like the 'revise the whitepaper' thing).

Why do you find it so insulting that I wondered if you sold your credentials -- with an explanation of my concerns -- after you started attacking someone who has, AFAIK, done nothing but support you, while you seem to think it's okay to spread worse claims about other people?

I'd say "what would you do in my shoes"-- but it seems like the answer is that you'd make accusations and not even provide evidence.  Is that really your intent?

Quote
but I remember you
Memory is a tricky thing. In fact, when writing the above I thought you had also posted several additional things that turned out to have actually been said by people who responded to the posts above agreeing with your attacks, but which weren't said by you -- sorry about that, but at least I haven't accused you of those things, because I actually checked.

I am one of the only project contributors who actually takes the time to even try to communicate with people who seem to be significantly confused; 99.9% of the time other people will just ignore them completely. I don't think it helps improve the level of discourse if everyone puts themselves in a high castle and doesn't even hear out their opposition or respect them enough to argue the case.  But at least when I'm critical of your actions I'm willing to be precise enough that you can defend or contextualize them... or even admit a mistake.  I've certainly made mistakes, but at least I've tried to do something good. I feel like your comments here -- with name calling like "incompetent" -- are saying that you'd prefer a world where no one does anything (except maybe insult and conspiracy-theorize about others), because if they do enough good things you'll ignore all that and attack them for the few things that could be improved.  If that isn't what you're going for, I'd really like you to help me understand where you're coming from.

Quote
But in Bitcoin Core, developers can work in their own branches with total freedom, and no red tape,
And no review, which was my point.

Quote
IMO there's no harm in making this step slightly longer
Perhaps, but I don't see how "slightly longer" connects with your post. The major release cycle is already six months long.  This issue took two years to discover; making the cycle seven months long would not have caused it to be detected, but it might plausibly make people take review less seriously.  I guess my point is that we've already made the cycle a lot more than slightly longer and tapped out the benefit from doing so; further increases might tip us into the realm of costs exceeding benefits, and there are other things we can do that don't add delays but would do more to prevent serious problems.
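
To give one concrete example of the kind of no-delay measure I mean (a simplified sketch of the general idea, not actual Bitcoin Core code, and all names in it are made up for illustration): keep an old "redundant" check alive as a cheap assertion beside the optimized path, so the invariant the fast path relies on is still enforced at runtime even if someone later concludes the explicit check is unnecessary.

Code:
// Simplified illustration, not actual Bitcoin Core code: keep a cheap
// belt-and-suspenders re-check next to an optimized path, so the
// invariant the fast path assumes is still enforced at runtime.
#include <cassert>
#include <set>
#include <utility>
#include <vector>

using OutPoint = std::pair<unsigned, unsigned>;  // stand-in for (txid, output index)

// Fast path: written on the assumption that the caller already
// rejected transactions with duplicate inputs.
bool ApplySpendsFast(const std::vector<OutPoint>& inputs) {
    // ... optimized spend logic would go here ...
    return !inputs.empty();
}

bool ApplySpends(const std::vector<OutPoint>& inputs) {
    // Defense in depth: re-verify the invariant instead of trusting it.
    std::set<OutPoint> unique(inputs.begin(), inputs.end());
    assert(unique.size() == inputs.size() && "duplicate input reached spend logic");
    return ApplySpendsFast(inputs);
}

int main() {
    return ApplySpends({{1, 0}, {2, 0}}) ? 0 : 1;
}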
chek2fire
Legendary
*
Offline Offline

Activity: 3402
Merit: 1142


Intergalactic Conciliator


View Profile
September 24, 2018, 01:14:22 AM
Last edit: September 24, 2018, 09:23:49 PM by chek2fire
 #39

I'd like to add my opinion to this very interesting discussion.
Maybe it's time for Bitcoin to focus again on being stable, unbanked, safe-haven money rather than another funky cryptocurrency. For this reason, imo, Bitcoin should only rarely have radical upgrades.
It seems that over the last two years the Bitcoin devs, pushed by this ridiculous fork and big-block drama, have been forced to be more creative, to prove to everyone that they can bring Bitcoin to the next level. An unnecessary move imo, because Bitcoin was not designed to be a funky cryptocurrency but a stable, decentralised money that everyone can trust with value.
Unfortunately, radical changes to the code will bring bugs, 100% guaranteed. This has happened throughout the history of software engineering, even with the most brilliant developers.

Quote
---offtopic---
The Cobra account seems to be controlled by different persons, and this is the reason why this account has so many different reactions; even his twitter account seems to have forgotten about the two-month campaign against Halong Mining. This account is another of the many paradoxes in the Bitcoin world Tongue. It's a very funny paradox for sure Cheesy
https://pbs.twimg.com/media/DnZeAqAV4AAJqac.jpg
https://pbs.twimg.com/media/DnZeFeeUUAEeBxQ.jpg

http://www.bitcoin-gr.org
4411 804B 0181 F444 ADBD 01D4 0664 00E4 37E7 228E
Cøbra
Bitcoin.org domain administrator
Full Member
***
Offline Offline

Activity: 123
Merit: 469


View Profile WWW
September 24, 2018, 03:07:38 AM
 #40


You apologize, only to spit in my face with more vicious attacks. My Reddit account was compromised at that time, but I quickly regained access to it. I explained this to the few people who contacted me in concern and thought the issue was put to rest, but it turns out it's being intentionally resurfaced to discredit me.

And "asking Blockstream for money" because I was "broke"? Seriously? I contacted a whole bunch of businesses about sponsorships for bitcoin.org, something I've done for a while. I've pasted the email below. Your timeline of events is wrong, the compromised posts are from Sept 2017, but this email was sent in May 2016. So it wasn't "a few weeks later" that my Reddit account posted those things. You are being intentionally deceptive, vague and making up timelines to make me seem more erratic and malicious. I might be distrustful of Blockstream (I don't trust most American technology companies, and I didn't trust the Foundation too much either), but when you make up timelines, misconstrue things, and behave like an amateur NSA PSYOP agent, it doesn't help your case.

Back on topic, I think there are two sets of Core users: those who run their node and rarely update it, and the more enthusiastic ones who keep up with upgrades. It might make sense to have an LTS version with more thoroughly tested and vetted consensus-critical code (code that has proven itself), alongside a regular version. I think more choice and flexibility could be useful here.

Obviously this won't catch all bugs (there will always be bugs), but it might help minimize them. Though with all the eyes on the code now after the recent bug, especially around optimizations, maybe more people will be ready to point out critical flaws.
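
For anyone who hasn't looked at the recent bug itself: it came from an optimization that dropped a duplicate-input check (CVE-2018-17144). Below is a rough standalone sketch of that kind of check; the types (OutPoint, TxIn, Transaction) are simplified stand-ins, not Core's actual classes. A one-line "redundant" check like this is exactly the sort of thing an LTS track would be slower to remove.

Code:
// Standalone sketch of a duplicate-input check of the kind whose removal
// led to CVE-2018-17144. The types below are simplified stand-ins, not
// Bitcoin Core's actual classes.
#include <iostream>
#include <set>
#include <string>
#include <tuple>
#include <vector>

struct OutPoint {
    std::string txid;  // hash of the transaction whose output is spent
    unsigned    n;     // index of that output
    bool operator<(const OutPoint& o) const {
        return std::tie(txid, n) < std::tie(o.txid, o.n);
    }
};

struct TxIn { OutPoint prevout; };
struct Transaction { std::vector<TxIn> vin; };

// Consensus-style rule: a transaction may not spend the same outpoint twice.
// (Core's analogous check rejects with "bad-txns-inputs-duplicate".)
bool HasNoDuplicateInputs(const Transaction& tx) {
    std::set<OutPoint> seen;
    for (const TxIn& in : tx.vin) {
        if (!seen.insert(in.prevout).second)
            return false;  // duplicate input detected
    }
    return true;
}

int main() {
    Transaction tx;
    tx.vin.push_back({{"placeholder-txid", 0}});
    tx.vin.push_back({{"placeholder-txid", 0}});  // same outpoint spent twice
    std::cout << (HasNoDuplicateInputs(tx) ? "accepted" : "rejected") << "\n";
}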

Quote
Date: Mon, 30 May 2016 11:38:52 +0000
From: Cøbra <domain@bitcoin.org>
To: <inquiries@blockstream.com>
Cc: "Gregory Maxwell" <greg@xiph.org>
Subject: Bitcoin.org Sponsorship

Hey,

Bitcoin.org is currently looking for a new sponsor (previously we were sponsored by the Bitcoin Foundation) and we were wondering if Blockstream would be interested in supporting the site financially. The site continues to get large increases in traffic, and we want to ensure that the site remains fast, online and secure well into the future. Bitcoin.org is the first place most new users go to learn about bitcoin, it teaches them how Bitcoin works, and helps them get set up with a wallet. The site's content has been translated into many languages, and any user is free to make a pull request on Github to improve the site.


If this opportunity is something that interests you, then please let me know, and we can discuss further the details of a sponsorship arrangement. Thanks.

