Bitcoin Forum

Bitcoin => Development & Technical Discussion => Topic started by: theymos on September 22, 2018, 07:47:55 AM



Title: The duplicate input vulnerability shouldn't be forgotten
Post by: theymos on September 22, 2018, 07:47:55 AM
The bug fixed in Bitcoin Core 0.16.3 was really bad. IMO it was the worst bug since 2010. If it had been exploited in a 0-day fashion, significant & widespread losses (due to acceptance of counterfeit BTC) would've been likely, and Bitcoin's reputation would've long been tarnished. Furthermore, since a ton of altcoins are based on Bitcoin Core, this would've affected a huge swath of the crypto space all at once.

Everyone's human, and secure software engineering is largely an unsolved problem. The Bitcoin Core devs have done a remarkably good job over the years; in fact, in this case they were able to recognize that a bug report for a DoS attack was actually a critical consensus bug, and then they managed to roll out a fix in a way which ended up protecting Bitcoin. I am thankful for their work and diligence. However, the fact that this bug was introduced and then allowed to exist from 0.14.0 to 0.16.2 was undeniably a major failure, and if all of Bitcoin Core's policies/practices are kept the same, then it's inevitable that a similar failure will eventually happen again, and we might not be so lucky with how it turns out that time.

Finger-pointing would not be constructive, but neither would it be sufficient to say "we just need more eyes on the code" and move on. This bug was very subtle, and I doubt that anyone would've ever found it by actually looking at the code. Indeed, the person who found it did so when they were doing something else and ended up tripping the assertion. Furthermore, this bug probably wouldn't have been found through standard unit testing, since this was a higher-level logic error. (By my count, something like 18% of the entire Bitcoin Core repository is tests, but that still didn't catch it.)
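For readers unfamiliar with the class of check involved, here is a deliberately simplified sketch (my own illustration in Python, not Bitcoin Core's actual C++ code) of the kind of duplicate-input rule that CVE-2018-17144 bypassed: a transaction that spends the same outpoint (txid, vout) twice must be rejected, or a miner could inflate the coin supply.

```python
# Simplified model of the duplicate-input consensus check.
# An "outpoint" is a (txid, vout) pair identifying one specific coin.

def has_duplicate_inputs(tx_inputs):
    """Return True if any (txid, vout) outpoint appears more than once."""
    seen = set()
    for outpoint in tx_inputs:
        if outpoint in seen:
            return True  # same coin spent twice: inflation if accepted
        seen.add(outpoint)
    return False
```

Per the disclosure, Core had this check, but a code path skipped it for transactions inside blocks as a performance optimization — which is exactly the kind of higher-level logic error that unit tests of the check itself would never catch.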

Perhaps all large Bitcoin companies should be expected by the community to assign skilled testing specialists to Core. This vulnerability could've been detected through more sophisticated testing methods, and currently a lot of companies don't contribute anything to Core development.

Perhaps the Core release schedule is too fast. Even though it sometimes already feels painfully slow, there's no particular "need to ship", so it could be slowed down arbitrarily in order to quadruple the amount of testing code or whatever.

Perhaps there should be more support and acceptance for running older versions, or a LTS branch, or a software fork focused on stability. The official maintenance policy (https://bitcoincore.org/en/lifecycle/) says that the current and previous major release is supported, but that doesn't seem to be closely followed. In this bug, backports were written for 0.14.x and 0.15.x, but as of this writing no binaries have been released for those, even days after 0.16.3's release. SegWit didn't have any backports when it was released, even though users would've needed to enforce SegWit in order to achieve full security if SegWit had activated as quickly as originally hoped. Sometimes fixes are backported (https://github.com/bitcoin/bitcoin/commit/c1b7421781b7a53485c6db4a6005a80e32267c9f) to an old version's git branch but no release is actually made. If 0.13.x is not currently supported, then there was no supported version without the vulnerability. This might indicate that the maintenance period isn't long enough; there are a few hundred people still using 0.13.2, and they were the only Bitcoin users completely safe from this vulnerability.

I do not think that it would be constructive to turn to any of the full node total-reimplementations like btcd, which are very amateur in comparison to Bitcoin Core.

I don't know exactly how this can be prevented from happening again, but I do know that it would be a mistake for the community to brush off this bug just because it ended up being mostly harmless this time.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: piotr_n on September 22, 2018, 08:50:24 AM
Since I might be the only person running an actual alternative, bitcoind-independent implementation of a node, I promise to let you know when someone mines a broken block and none of your software notices it.

Just try to not ban me from all your forums by that time :)


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: DooMAD on September 22, 2018, 09:28:57 AM
I know this is probably the last argument most people want to hear, but is this not a case where more independent implementations would result in less risk?  If you maintain that one particular client should form the "backbone of the network", you have to consider what happens if that backbone breaks.  If there were a wider variety of clients being run, there may have been less of a threat to the network overall?

Core have done exceptional work, but at the end of the day, they're still only human.  Assigning more people to keep an eye on one codebase might help mitigate faults, but if there's only one main codebase, there's still going to be an issue if an error slips through the net.  Hence my belief that more codebases would create a stronger safeguard.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: LeGaulois on September 22, 2018, 11:54:53 AM
Quote
Perhaps all large Bitcoin companies should be expected by the community to assign skilled testing specialists to Core. This vulnerability could've been detected through more sophisticated testing methods, and currently a lot of companies don't contribute anything to Core development.


Why should companies contribute to Core? To get the possibility of saying "Powered by <insert company name>" in the future?

There are a lot of people who won't trust cryptocurrency anymore, especially the newbies and the people who weren't interested in crypto. And the fact that the bug has been "accidentally" found is not so reassuring. Who knows what would have happened if the bug had not been fixed? Maybe in 2010 it wasn't a big deal, but in 2018 the results could be a lot different (I am not talking technically).



Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: aleksej996 on September 22, 2018, 01:05:01 PM
I agree with DooMad completely. Diversity is the only real solution to network security.

We have to deal with the fact that bugs are found in absolutely ALL software.
The only thing we can hope is that: A) it is rare for bugs to be found in all software at the same time, and B) no single entity will likely have all of these bugs at that time.

You can put a million top-tier devs on the code and they will still let a bug through every few decades.
The security of the entire Internet doesn't come from low-level code that is perfectly secure, but from the many different systems and implementations running it. You need more implementations to have any chance of long-term survival.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: achow101 on September 22, 2018, 03:15:32 PM
I know this is probably the last argument most people want to hear, but is this not a case where more independent implementations would result in less risk?  If you maintain that one particular client should form the "backbone of the network", you have to consider what happens if that backbone breaks.  If there were a wider variety of clients being run, there may have been less of a threat to the network overall?

Core have done exceptional work, but at the end of the day, they're still only human.  Assigning more people to keep an eye on one codebase might help mitigate faults, but if there's only one main codebase, there's still going to be an issue if an error slips through the net.  Hence my belief that more codebases would create a stronger safeguard.
What many people do not realize is that having people run different implementations makes it easier for attackers to partition the network and thus harder to resolve situations where vulnerabilities are exploited. Network partitioning can cause multiple blockchain forks, which is a much harder situation to resolve than a single fork or an entire network shutdown. It is not just that some nodes go down while the rest stay up and the network keeps running. If the attack is directed in a certain way, miners will be separated and no longer connected to each other, which then causes forks. Network partitioning is a serious issue, and running different implementations makes it easier for attackers to partition the network. So having multiple implementations and recommending that people run alternative software is really not a good thing.

That being said, having multiple implementations is good for the individual who runs multiple nodes with different implementations. Running multiple nodes, each with different software, lets him know when an attack exploiting a critical bug is going on. If everyone ran multiple nodes with different implementations, then multiple implementations would be fine: the network would not shut down and there wouldn't be any network partitioning. But not everyone is going to do that.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: Quickseller on September 22, 2018, 03:24:16 PM
If it had been exploited in a 0-day fashion, significant & widespread losses (due to acceptance of counterfeit BTC) would've been likely,
I am wary of this assertion. Many Bitcoin-related businesses use custom implementations of Bitcoin that are based on the underlying consensus rules. The same is true for the miners today (even if they broadcast that they are using other implementations), so I doubt a single malicious actor could have gotten counterfeit BTC more than a small number of confirmations.

I would echo what DooMAD said in that the solution is to encourage more implementations, and for each implementation to not have a high percentage of overall nodes.

At the end of the day, the Bitcoin network is nothing more than a bunch of consensus rules that everyone follows.  

What many people do not realize is that having people run different implementations makes it easier for attackers to partition the network and thus harder to resolve situations where vulnerabilities are exploited. Network partitioning can cause multiple blockchain forks which is a much harder situation to resolve than a single fork or an entire network shutdown. 

The solution to this is simple, and it is that the blockchain (whose tip contains the most cumulative work) that follows all of the consensus rules is the Bitcoin blockchain, and any fork of this is not (in many cases, it would be an altcoin intentionally created).

The incentive to attack an implementation that is used by 10%-20% of the Bitcoin network is much smaller than the incentive to attack an implementation that affects 90% of the network. The former would be a minor hiccup, while the latter has the potential to actually steal large amounts of money, and cause serious disruptions.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: bones261 on September 22, 2018, 04:01:05 PM
If it had been exploited in a 0-day fashion, significant & widespread losses (due to acceptance of counterfeit BTC) would've been likely,

I am uncertain how any miner would have been able to spread counterfeit coins effectively, since the other aspect of the bug was to cause nodes to crash. Wouldn't this have hampered the transfer of coins on the renegade chain? The only strategy that I can see for an attacking miner would have been to open shorts before the attack, and somehow close them after the bad news had spread, but before the exchange(s) froze trading. They would then have to transfer the ill-gotten funds off the exchange(s) in a hurry, before the victim exchange(s) caught on and froze their account.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: achow101 on September 22, 2018, 05:27:33 PM
The solution to this is simple, and it is that the blockchain (whose tip contains the most cumulative work) that follows all of the consensus rules is the Bitcoin blockchain, and any fork of this is not (in many cases, it would be an altcoin intentionally created).
The complexity comes in ensuring that the network is no longer partitioned and that everyone has received the blockchain with the most cumulative work.

The incentive to attack an implementation that is used by 10%-20% of the Bitcoin network is much smaller than the incentive to attack an implementation that affects 90% of the network. The former would be a minor hiccup, while the latter has the potential to actually steal large amounts of money, and cause serious disruptions.
I disagree.

Suppose there is an exchange that happens to be connected to some nodes that are vulnerable to some kind of attack. Or perhaps they aren't connected directly to those nodes, but connected to nodes which are connected to those nodes. Suppose this attack causes those nodes to go offline or otherwise become disconnected from the network. Even if there are a very small number of these vulnerable nodes, if they happen to form a ring around the exchange, an attacker can attack those nodes and cause the network to partition. It would break into at least two pieces: the chunk containing the exchange, and the rest of the network. The attacker, if he has some hashrate, can now be mining a fork of the blockchain specifically created so that he can attack this exchange. Since this is a fork for a part of the network that is no longer receiving the rest of the blockchain, this attacker has 100% of the hashrate for that fork and can do everything with it that anyone with >51% of the hashrate can do. This kind of attack does not need a large number of nodes to be vulnerable, it just needs enough so that an attacker can partition the network.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: piotr_n on September 22, 2018, 06:21:27 PM
A fork would be much less of a problem than building a blockchain with blocks that inflate the coin supply without anyone realizing.

The existence of an unexpected fork is a serious alarm signal that the entire system can act upon, e.g. by stopping any economic activity until the situation clears up.

But what are you going to do upon realizing that e.g. for the past week someone has been mining blocks that increased the amount of coins in his wallet by 10% of the extra BTC supply?

Obviously, as soon as you realize the screw up you make sure everyone upgrades the software.
But how are you going to handle the existing damage?
Well, I think in such case you basically end up with only two choices:

1) You let the guy keep the money he created out of thin air, most of which he probably already spent anyway - at the expense of all the other bitcoin holders.

2) You invalidate all the blocks (transactions) from the past week - pushing the damage on the ones that accepted any payments during that time.

You can also think of invalidating the coins that were "added illegally", kind of like ethereum did with DAO hack (although they invalidated a reallocation of coins, not creation of new ones).
But assuming that the guy who came up with the exploit wasn't an idiot, they are long spent already and mixed with "legal" coins, so that's not really an option.

My point is: whenever such a catastrophic event happens we want to know about it as soon as possible, to try preventing the damage.
That is why having only one software implementation is a very bad idea.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: Coding Enthusiast on September 22, 2018, 06:49:00 PM
Suppose there is an exchange that happens to be connected to some nodes that are vulnerable to some kind of attack. Or perhaps they aren't connected directly to those nodes, but connected to nodes which are connected to those nodes. Suppose this attack causes those nodes to go offline or otherwise become disconnected from the network. Even if there are a very small number of these vulnerable nodes, if they happen to form a ring around the exchange, an attacker can attack those nodes and cause the network to partition.

But you just described a Sybil attack, which can happen with or without multiple implementations of Bitcoin. It doesn't even need a vulnerability or hashrate to happen.

I do say we need more implementations of Bitcoin from scratch. It won't be a perfect solution and it will take time for the new software to reach the maturity of Bitcoin Core, but the benefits outweigh the costs.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: achow101 on September 22, 2018, 07:09:43 PM
My point is: whenever such a catastrophic event happens we want to know about it as soon as possible, to try preventing the damage.
That is why having only one software implementation is a very bad idea.
That's why I said in an earlier post that having the same person run multiple implementations is good. I don't think that if everyone ran different software the problem would be noticed significantly faster than if everyone were using the same software.

But you just described a Sybil attack, which can happen with or without multiple implementations of Bitcoin. It doesn't even need a vulnerability or hashrate to happen.
It's easier to perform this type of attack when you can get other people to voluntarily be your sybil nodes. That's what multiple implementations do in this situation: other people are voluntarily being the sybil nodes.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: piotr_n on September 22, 2018, 07:19:02 PM
My point is: whenever such a catastrophic event happens we want to know about it as soon as possible, to try preventing the damage.
That is why having only one software implementation is a very bad idea.
That's why I said in an earlier post that having the same person run multiple implementations is good. I don't think that if everyone ran different software the problem would be noticed significantly faster than if everyone were using the same software.

It doesn't have to be the same person.
Like in this case, if such an event happened that there was a block trying to spend the same input twice, my node would just not let it through and would get stuck on one block, which would make me analyze why, which would maybe trigger an alarm.
It's obviously better if more people run a node like mine. Because sometimes I'm busy :)


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: Coding Enthusiast on September 22, 2018, 07:27:09 PM
But you just described a Sybil attack, which can happen with or without multiple implementations of Bitcoin. It doesn't even need a vulnerability or hashrate to happen.
It's easier to perform this type of attack when you can get other people to voluntarily be your sybil nodes. That's what multiple implementations do in this situation: other people are voluntarily being the sybil nodes.
I may be wrong about this, but the way I see it, like everything else (such as block size), this is purely about what is less bad.

On one hand we have a network of nodes that all run one implementation; if that implementation has a vulnerability which is exploited, the whole network will be crippled and the damage will be big.
On the other hand we have new, different implementation(s) that might have their own vulnerabilities and might introduce attack surfaces, but those will not cripple the network, and we have ways to fight these attacks to some extent. So any damage done won't be nearly as big.

What is the damage of this sybil attack? Some exchange and the traders losing money? That is not new.
What is the damage of a vulnerability like this being exploited? We would be forced to do a "roll back" and lose immutability of bitcoin.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: aleksej996 on September 22, 2018, 10:19:05 PM
Yeah, what are the chances of this never happening again?
Very small, we need to accept it. Better a fork than a complete failure.

The way I see it, we need to start doing three things as a community:
1) Operators that are running full nodes that want to contribute more to the network should start running more nodes with different implementations.
2) We should connect our nodes to nodes running different implementations and have warnings pop up in case of a chain-split.
3) We need more good quality implementations.

Even this will not protect us 100% from a potential network-wide vulnerability, but it is a hell of a lot better.
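Point (2) above could be sketched roughly like this (my own illustration: the tip hashes would come from each node's `getbestblockhash` JSON-RPC call, and the alarm threshold is an arbitrary assumption, since brief disagreement is normal while a new block propagates):

```python
# Rough sketch of a chain-split alarm: compare the best-tip block hashes
# reported by two nodes running different implementations, and warn once
# they disagree for several consecutive polls.

class SplitDetector:
    def __init__(self, threshold: int = 3):
        self.threshold = threshold  # consecutive mismatches before alarming
        self.mismatches = 0

    def observe(self, tip_a: str, tip_b: str) -> bool:
        """Feed one poll of both nodes' tips; return True to raise an alarm."""
        if tip_a == tip_b:
            self.mismatches = 0  # tips agree again; reset the counter
            return False
        self.mismatches += 1
        return self.mismatches >= self.threshold
```

The polling loop, RPC plumbing, and what "warn" means (email, page, halting withdrawals) are left to the operator.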


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: theymos on September 22, 2018, 11:36:01 PM
I am uncertain how any miner would have been able to spread counterfeit coins effectively, since the other aspect of the bug was to cause nodes to crash.

Did you read the full disclosure (https://bitcoincore.org/en/2018/09/20/notice/)? 0.14.x would always crash, but 0.15.0-0.16.2 could in some circumstances not crash, accepting the creation of counterfeit BTC as if it were normal.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: bones261 on September 23, 2018, 12:45:12 AM
I am uncertain how any miner would have been able to spread counterfeit coins effectively, since the other aspect of the bug was to cause nodes to crash.

Did you read the full disclosure (https://bitcoincore.org/en/2018/09/20/notice/)? 0.14.x would always crash, but 0.15.0-0.16.2 could in some circumstances not crash, accepting the creation of counterfeit BTC as if it were normal.

I think that I am misunderstanding what exactly this means.

Quote
However, if the output being double-spent was created in a previous block, an entry will still remain in the CCoin map with the DIRTY flag set and having been marked as spent, resulting in no such assertion.

Are they talking about the double-spend input, that uses a previously created UTXO? Or are they talking about the newly created UTXO that had two double-spend inputs and now has a block built on top of it?

Edit: I understand this better now. This answer on Stack Exchange helped clear things up for me.  https://bitcoin.stackexchange.com/questions/79481/how-does-the-most-recently-found-critical-vulnerability-cve-2018-17144-work


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: Quickseller on September 23, 2018, 06:43:42 AM
The incentive to attack an implementation that is used by 10%-20% of the Bitcoin network is much smaller than the incentive to attack an implementation that affects 90% of the network. The former would be a minor hiccup, while the latter has the potential to actually steal large amounts of money, and cause serious disruptions.
I disagree.

Suppose there is an exchange that happens to be connected to some nodes that are vulnerable to some kind of attack. Or perhaps they aren't connected directly to those nodes, but connected to nodes which are connected to those nodes. Suppose this attack causes those nodes to go offline or otherwise become disconnected from the network. Even if there are a very small number of these vulnerable nodes, if they happen to form a ring around the exchange, an attacker can attack those nodes and cause the network to partition. It would break into at least two pieces: the chunk containing the exchange, and the rest of the network. The attacker, if he has some hashrate, can now be mining a fork of the blockchain specifically created so that he can attack this exchange. Since this is a fork for a part of the network that is no longer receiving the rest of the blockchain, this attacker has 100% of the hashrate for that fork and can do everything with it that anyone with >51% of the hashrate can do. This kind of attack does not need a large number of nodes to be vulnerable, it just needs enough so that an attacker can partition the network.
Exchanges generally do not advertise the details of their nodes, so an attacker would not know that a specific vulnerability affects xx exchange. Further, an exchange may build their own implementation that may be very similar to another implementation.

In regards to your Sybil attack, businesses customize their node software to ensure that their nodes connect to multiple implementations, and if block information varies between implementations, they would know to troubleshoot to figure out what is going on. I also suspect that most exchanges handling significant amounts of money are already running multiple nodes with multiple implementations.

Even if what you say is true, which as I mention above, I disagree with, this would only affect individual businesses, not the entire network. Today, businesses get hacked on a fairly regular basis, and confidence is generally not lost as a result of this.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: aliashraf on September 23, 2018, 07:27:40 AM
It is very encouraging to see a prominent figure like the OP start a topic about this event, admitting that there are lessons to be learned and measures to be taken. Kudos theymos.  :)

I understand there are many people with various incentives, mostly not in the best interests of Bitcoin, who may find such an event a good opportunity to take advantage in an irresponsible and unproductive way. Although I have a reputation on this forum for being somewhat discontented with the Core devs, I hope my statements here, besides being helpful in the context of this topic, will show how committed I am to Bitcoin as a liberating movement, instead of blindly opposing and fighting with specific parties.

Seemingly, the "multiple implementations proposal" is trending right now in this thread. I don't know to what extent it could be considered a healthy proposal with good intentions, but I'm sure it is neither correct nor practical as an effective measure against implementation bugs. On the contrary, it seems more dangerous than helpful.

Being decentralized does not change the fact that Bitcoin is an integrated meta-system. Running multiple implementations of code in such a system is not recommended because it adds another complexity factor: multiple implementations mean more lines of code and hence more bugs. It will cause frequent chain splits and encourage a new range of Sybil attacks.

Suggesting that no implementation should be used by a majority, besides its impracticality (who is in charge of imposing this rule?), does not seem to be of much help either:
Firstly, it implies a new kind of consensus system, with no theoretical support. I'm not aware of any serious work covering a decentralized consensus based system that uses heterogeneity of implementations as a main or auxiliary security measure e.g. for immunity against software bugs.

Secondly, it is very unlikely that such an evenly distributed heterogeneity would help in damage control by localizing bugs. Nodes have to validate blocks independently, and they don't apply the longest/heaviest chain rule unless they have validated both chains beforehand. It is impossible for a wallet to sense that something is wrong with its own code and decide to follow the majority blindly. One possible solution for such a scheme would be running multiple implementations in a single federated node that uses a BFT algorithm to make final decisions. It has been suggested up-thread, and is infeasible because of the exaggerated costs and complexities involved.

Thirdly, adversaries would find it feasible to identify flaws in a single implementation and target its users (who are typically a minority) without compromising the whole ecosystem explicitly. In this scenario, although Bitcoin won't collapse dramatically, the scammers might consider it an advantage, as the funds they steal preserve their value.

In conclusion, I would say this proposal, encouraging the development and adoption of multiple implementations of the Bitcoin protocol, shows negative indications and should be dropped as a serious proposal.

Hence the issue remains open: What happened and what measures should be taken to mitigate such disruptive problems?

I have something to suggest in this regard but for now I think it is more convenient to close the case with the multiple implementations proposal before proceeding any more.




Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: arulbero on September 23, 2018, 07:46:39 AM
It seems to me that it would be very simple to know immediately when a double spend occurs.

Every node knows the length of the blockchain, and every node knows which blocks generate 50 / 25 / 12.5 BTC, so the sum of all bitcoin in the UTXO set is bounded by a simple function of the blockchain's length: s = f(length).

If my node receives a block and the total bitcoin in my UTXO set grows by more than 12.5 BTC, I will immediately detect a double spend.
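The f(length) above can be written down directly (a sketch of my own, not any poster's code; values in satoshis, using the 210,000-block halving schedule):

```python
# Upper bound on coins in existence after `height` blocks: the sum of all
# block subsidies. Fees only move existing coins, so if accepting a block
# grows the UTXO set's total by more than the current subsidy, the chain
# contains inflation (e.g. a duplicated input that was accepted).

COIN = 100_000_000          # satoshis per BTC
HALVING_INTERVAL = 210_000  # blocks between subsidy halvings

def max_supply_at(height: int) -> int:
    """Total satoshis of subsidy issued in the first `height` blocks."""
    total, subsidy, remaining = 0, 50 * COIN, height
    while remaining > 0 and subsidy > 0:
        step = min(remaining, HALVING_INTERVAL)
        total += step * subsidy
        remaining -= step
        subsidy >>= 1  # subsidy halves every 210,000 blocks
    return total
```

In practice the bound is slightly loose, since some early coinbases claimed less than the full subsidy, but any growth above it is unambiguous evidence of counterfeit coins.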


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: Quickseller on September 23, 2018, 08:00:59 AM
Multiple implementations means more lines of code and hence more bugs. It will cause frequent chain splits and encourages a new range of sybil attacks.

This comes down to one's central belief as to what "Bitcoin" is: is Bitcoin a collection of consensus rules, or a piece of software maintained by a very small group of people?

If your argument is that Bitcoin is a software run by a small group of people (eg. those who decide what gets published every time there is an upgrade), then Bitcoin is much more centralized than most central banks.

This also comes down to another important question as to who exactly should be running a full node. It is my belief that businesses, and those who are going to receive bitcoin prior to giving anything of value to their trading partners, are the ones who should run a full node. Anyone who sends valuable property to someone prior to receiving bitcoin is already at risk of outright not receiving bitcoin in return, so the value of running a fully validating node is minimal. Further, it is the responsibility of those who are running a full node to personally ensure the software they are running is going to perform as they believe it will, and to not rely on third parties for this.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: aliashraf on September 23, 2018, 09:55:55 AM
Multiple implementations means more lines of code and hence more bugs. It will cause frequent chain splits and encourages a new range of sybil attacks.

This comes down to one's central belief as to what "Bitcoin" is: is Bitcoin a collection of consensus rules, or a piece of software maintained by a very small group of people?

If your argument is that Bitcoin is a software run by a small group of people (eg. those who decide what gets published every time there is an upgrade), then Bitcoin is much more centralized than most central banks.

This also comes down to another important question as to who exactly should be running a full node. It is my belief that businesses, and those who are going to receive bitcoin prior to giving anything of value to their trading partners, are the ones who should run a full node. Anyone who sends valuable property to someone prior to receiving bitcoin is already at risk of outright not receiving bitcoin in return, so the value of running a fully validating node is minimal. Further, it is the responsibility of those who are running a full node to personally ensure the software they are running is going to perform as they believe it will, and to not rely on third parties for this.
I agree both problems you are mentioning are very crucial, but I'm afraid they are not relevant in this context.

Definitely, Bitcoin is a protocol, and as a protocol it has to be implemented somehow. One might buy closed-source software, or code a proprietary implementation down to the metal, and participate in the protocol; but when people use an open-source implementation, things become a bit more complicated. In the other cases it would be the users' own responsibility to take care of their safety, but for open-source software it is the community's credit that is being spent and that assures users of their security.

Obviously, it would be much safer for a community to take care of one implementation with fewer lines of code.

As I understand, your concern is governance, but it is another topic (an unsolved one, I suppose) and there is no guarantee we could have an optimized solution for both immunity to software bugs and governance at the same time in one single package, at least right now.

As for your assertions about the limited value of running a full node, I agree again; but when it comes to an implementation used by a significant number of users, this value accumulates to sensitive levels, and people may lose both considerable amounts of money and (more importantly) their faith in the whole ecosystem.



Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: DooMAD on September 23, 2018, 12:17:14 PM
Obviously, it would be much safer for a community to take care of one implementation with fewer lines of codes.

I think I can guess where this is heading.     ::)



That being said, having multiple implementations is good for the individual who runs multiple nodes with different implementations. With multiple nodes, each running different software, an attack exploiting a critical bug in one of them lets the operator know an attack is going on. If everyone ran multiple nodes with different implementations, then multiple implementations would be fine: the network would not shut down and there wouldn't be any network partitioning. But not everyone is going to do that.

Perhaps not everyone would need to.  If we adopt theymos' idea that larger Bitcoin companies should effectively place a small percentage of their employees on secondment with Core, what if they ran a Core node but also maintained a second node with their own business-oriented implementation?  Something that might focus more on features for merchants, for example.  They could then report any inconsistencies, and issues like this one would be far less likely to go unnoticed for 18 months.  I'm unsure how many companies it would take for the idea to be effective, but if it worked, it would help create a decent safety net.




Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: Cøbra on September 23, 2018, 04:49:14 PM
I think the problem is that Core has too many developers but not enough review/testing, and there's simply too much going on at once. When a lot of the changes are optimizations, refactoring, and the occasional networking/consensus-touching work, it's inevitable that situations arise where a scary bug like this slips through due to all the emergent complexity. It's not as if all these guys are doing GUI features. I like the idea of a fork with only necessary consensus changes (as these are very well tested). There should be a choice between two reference implementations: "safe but slow" and "slightly more risky but fast". The more I think about it, the more an LTS version with only consensus changes and critical bug fixes makes a lot of sense in the context of Bitcoin.

To be honest, some really strange decisions get made at Bitcoin Core. For example, right now, a huge majority of the network is running vulnerable software, but because the alert system was removed, there's no way to reach them and tell them to upgrade. We just have to pray they check certain sites. Makes no sense. Some sort of alert feature in software that handles money is a necessity, it could have been enabled by default with an option to disable it. Satoshi was so smart and practical to add that.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: gmaxwell on September 23, 2018, 04:53:22 PM
Just responding to this particular fork of discussion now,

I know this is probably the last argument most people want to hear, but is this not a case where more independent implementations would result in less risk?

They would create more risk. I don't think there is any reason to doubt that this is an objective fact which has been borne out by history.

First, failures in software are not independent. For example, when BU nodes were crashing due to xthin bugs, Classic was also vulnerable to effectively the same bug, even though its code was different and some triggers that would crash one wouldn't crash the other. There have been many other bugs in Bitcoin that have been straight up replicated many times, for example the vulnerability to crashes through memory exhaustion caused by loading a lot of inputs from the database concurrently: every other implementation had it too, and in some it caused a lot more damage.

Even entirely independently written software tends to have similar faults:

Quote
By assuming independence one can obtain ultra-reliable-level estimates of reliability even though the individual versions have failure rates on the order of 10^-4. Unfortunately, the independence assumption has been rejected at the 99% confidence level in several experiments for low reliability software.

Furthermore, the independence assumption cannot ever be validated for high reliability software because of the exorbitant test times required. If one cannot assume independence then one must measure correlations. This is infeasible as well—it requires as much testing time as life-testing the system because the correlations must be in the ultra-reliable region in order for the system to be ultra-reliable. Therefore, it is not possible, within feasible amounts of testing time, to establish that design diversity achieves ultra-reliability. Consequently, design diversity can create an “illusion” of ultra-reliability without actually providing it.
(source) (https://tatourian.blog/2016/11/19/building-ultra-reliable-software/)

More critically, the primary thing the Bitcoin system does is come to consensus: most conceivable bugs are more or less irrelevant so long as the network behaves consistently.  Is an nlocktime check a < or a <= check? Who cares... except if nodes differ the network can be split and funds can be taken through doublespending the reorg.  Are weird "hybrid pubkeys" allowed? Who cares... except if nodes differ the network can be split and funds can be taken through doublespending the reorg.  What happens when a transaction has a nonsensical signature that indicates sighash single without a matching output? Who cares... except if nodes differ the network can be split and funds can be taken through doublespending the reorg. And so on.  For the vast majority of potential bugs, having multiple incompatible implementations turns a non-event into an increasingly serious vulnerability.
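To make that "who cares... except" point concrete, here is a minimal sketch of how even a one-character `<` vs `<=` disagreement between two implementations splits the network. This is hypothetical illustration code: the function names and the specific rule are invented, not Bitcoin Core's actual nLockTime logic.

```python
# Two hypothetical node implementations that differ only in one comparison.

def tx_final_strict(tx_locktime: int, block_height: int) -> bool:
    # Implementation A: transaction is final only if locktime < current height.
    return tx_locktime < block_height

def tx_final_lenient(tx_locktime: int, block_height: int) -> bool:
    # Implementation B: off-by-one -- also accepts locktime == current height.
    return tx_locktime <= block_height

# A transaction whose locktime equals the current height is the boundary case:
# node B accepts a block containing it, node A rejects that same block,
# and the network splits into two chains -- opening the door to reorg theft.
locktime, height = 500_000, 500_000
print(tx_final_strict(locktime, height))   # False -- node A rejects the block
print(tx_final_lenient(locktime, height))  # True  -- node B accepts the block
```

For every non-boundary input the two functions agree, which is exactly why such a divergence can lurk unnoticed until an attacker deliberately constructs the boundary case.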

Even for an issue which isn't a "who cares", like allowing inflation of the supply, adding a surprise disagreement between nodes does not help matters!  The positive effect it creates is that you might notice the issue faster after the fact, but for that benefit you don't actually need additional full implementations, just monitoring code. In the case of CVE-2018-17144 the sanity checks on node startup would also detect the bad block. On the negative side, the disagreement just causes a consensus split and results in people being exposed to funds loss in a reorg-- which is the same kind of exposure they'd have with a single implementation plus monitoring, then fixing the issue when detected, other than perhaps the reorg(s) being longer or shorter in one case or another.

In this case ABC also recently totally reworked the detection of double spends as part of their "canonical transaction order" changes, which require that double-spending checks be deferred and run as a separate pass after processing, because transactions are allowed to be out of their logical causal ordering.  Yet they both failed to find this bug as part of testing that change and failed to accidentally fix it.
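For context, the duplicate-input check at the heart of CVE-2018-17144 is conceptually tiny, which is part of why its effective omission was so easy to miss in review. Roughly, in illustrative Python (this is a sketch, not Core's actual C++ in CheckTransaction):

```python
def has_duplicate_inputs(tx_inputs) -> bool:
    """Return True if any outpoint (txid, vout index) is spent twice.

    A transaction that lists the same outpoint twice must be rejected;
    in CVE-2018-17144 a check of this kind was effectively skipped for
    transactions inside blocks, allowing counterfeit inflation."""
    seen = set()
    for outpoint in tx_inputs:
        if outpoint in seen:
            return True
        seen.add(outpoint)
    return False

# The same outpoint ("aa"*32, 0) listed twice: spending one coin as if it
# were two. A validating node must reject a block containing this.
bad_tx = [("aa" * 32, 0), ("bb" * 32, 1), ("aa" * 32, 0)]
print(has_duplicate_inputs(bad_tx))      # True -- must be rejected
print(has_duplicate_inputs(bad_tx[:2]))  # False -- distinct inputs are fine
```

The danger was never that the check is hard to write; it's that a correct check can be silently bypassed on one code path while review is looking at another.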

Through all the history of Bitcoin-- altcoin forks copying the code, other reimplementations, and many other consensus bugs, some serious and some largely benign-- this is the first time someone working on another serious implementation that people actually use has found one.  I might be forgetting something, but I'm certainly not forgetting many. Kudos to awemany. In all other cases they either had the same bugs without knowing it, or accidentally fixed them, creating fork risk-- creating vulnerability-- without knowing it. It also isn't as if having multiple implementations is new: for the purpose of this point, every one of the 1001 altcoins created by copying the Bitcoin code counts as a separate implementation too, or at least the ones which are actively maintained do.

Having more versions also dilutes resources, spreading review and testing out across implementations. For the most part, historically, the Bitcoin project itself hasn't suffered much from this: the alternatives have tended to end up as barely maintained one- or two-developer projects (effectively orphaning the users that depended on them-- another cost of that diversity), so it doesn't look like they caused much meaningful dilution to the main project.  But it clearly harms the alternatives, since they usually end up with just one or two active developers and seldom have extensive testing.  Node diversity also hurts security by making it much more complicated to keep confidential fixes private. When an issue hits multiple codebases its existence has to be shared with more people earlier, increasing the risk of leaks, and coincidentally timed cover changes can give away a fix which would otherwise go unnoticed. In a model with competing implementations some implementers might also hope to gain financially by exploiting their competitors' bugs, further increasing the risk of leaks.

Network effects appear to ensure that in the long run only one or maybe a few versions will ultimately have enough adoption to actually matter. People tend to flock to the same implementations because they support the features they need and because users can get the best help from others running the same software. So even if the benefits didn't usually fail to materialize, they'd fail to matter through disuse.

Finally, diversity can exist in forms other than creating more probably-disused, probably-incompatible implementations.  The Bitcoin software project is a decentralized collaboration of dozens of regular developers.  Any one of the participants there could go and make their own implementation instead (a couple have, in addition to contributing).  The diverse contributors there do find, fix, and prevent LOTS of bugs, but they do so without introducing more incompatible implementations to the network.   That is, in fact, the reason many people contribute there at all: to get the benefit of other people hardening the work, and to avoid damaging the network with potential incompatibilities.

All this also ignores the costs of implementation diversity unrelated to reliability and security, such as the redundancy often making improvements more expensive and slow to implement.

TLDR: In Bitcoin the system as a whole is vulnerable to disruption if any popular implementation has a consensus bug: the disruption can cause financial losses even for people not running the vulnerable software, by splitting consensus and causing reorgs. Implementations also tend to reimplement the same bugs even in the rare cases where they aren't mostly just copying (or transliterating) code, which is good news for staying consistent but means that even when consistency isn't the concern the hoped-for benefit usually will not materialize. Network effects also tend to keep diversity low regardless, which again is good for consistency but bad for diversity actually doing any good. To the extent that diversity can help, it does so primarily through review and monitoring, which are better achieved through collaboration on a smaller number of implementations.

Cheers,


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: gmaxwell on September 23, 2018, 05:00:55 PM
For example, right now, a huge majority of the network is running vulnerable software, but because the alert system was removed, there's no way to reach them and tell them to upgrade. We just have to pray they check certain sites. Makes no sense. Some sort of alert feature in software that handles money is a necessity, it could have been enabled by default with an option to disable it. Satoshi was so smart and practical to add that.
You do realize that you're lamenting the removal of a back door key that could be used to remotely crash nodes?   Even without the crash it inherently meant that a single potentially compromised party had the power to pop up to all users of the software potentially misleading messages-- like falsely claiming there were issues that required replacing the software with insecure alternatives. At least when people get news "from the internet" there are many competing information sources that could warn people to hold off.

No thanks.

No one who wants that power should by any account be allowed to have it.  Not if Bitcoin is to really show its value as a decentralized system.

I don't doubt that better notification systems could be built... things designed to prevent single bad actors from popping up malicious messages, but even then there is no reason to have such tools directly integrated into Bitcoin itself.  I'd love to see someone create a messaging system in which no single compromised party could send a message or block publication, where messages couldn't be targeted at small groups but had to be broadcast even in the face of network attacks, where people could veto messages before they're displayed, and where message display would be spread out over time so users exposed earlier could sound the alarm about a bad message... etc.  But there is no reason for such a system to be integrated into Bitcoin; it would be useful far beyond it.

But it just goes to show that for every complex problem there is a simple centralized solution and inevitably someone will argue for Bitcoin to adopt it.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: gmaxwell on September 23, 2018, 05:44:52 PM
Perhaps the Core release schedule is too fast. Even though it sometimes already feels painfully slow, there's no particular "need to ship", so it could be slowed down arbitrarily in order to quadruple the amount of testing code or whatever.

I believe slower would potentially result in less testing and not likely result in more at this point.

If newly introduced features were frequently turning out to have serious bugs discovered shortly after shipping, there might be a case that delaying improvements longer before putting them into critical operation would improve the situation... but I think we've been relatively free of such issues.  The kind of issues that will be found with just a bit more time are almost all already found prior to release.

Quadrupling the amount of (useful) testing would be great.  But the way to get there may be via more speed, not less. You might drive safer if you drove a bit slower, but below some speed, you become more likely to just fall asleep while driving.

Imagine a bridge construction crew with generally good safety practices that has a rare fatal accident. Some government bureaucrat swings in and says "you're constructing too fast: it would be okay to construct slower, fill out these 1001 forms in triplicate for each action you take to prevent more problems".  In some kind of theoretical world the extra checks would help, or at least not hurt.  But for most work there is a range of optimal paces where the best work is done: fast enough to keep people's minds maximally engaged, slow enough that everything runs smoothly and all necessary precautions can be taken.  We wouldn't be too surprised to see that hypothetical crew's accident rate go up after a change to increase overhead in the name of making things safer: either efforts that actually improve safety get diverted to safety theatre, or some people just tune out, assume the procedure is responsible for safety instead of themselves, and ultimately make more errors.

So I think rather, it would be good to say that things should be safer _even if_ it would be slower. This is probably what you were thinking, but it's important to recognize that "just do things slower" itself will not make things safer when already the effort isn't suffering from momentary oopses.

Directing more effort into testing has been a long term challenge for us,  in part because the art and science of testing is no less difficult than any other aspect of the system's engineering. Testing involves particular skills and aptitudes that not everyone has.

What I've found is that when you ask people who aren't skilled at testing to write more tests, what they generally write are rigid, narrow-scope, known-response tests.    That is, they imagine the typical input to a function (or subsystem), feed that into it, and then make the test check for exactly the result the existing function produces.  This sort of test is of very low value:  it doesn't test extremal conditions, especially not the ones the developers hadn't thought of (which is where the bugs most likely are); it doesn't check logical properties, so it gives us no reason to think that the answer the function currently produces is _right_; and it will trigger a failure if the behaviour changes even when the change is benign, but only for the tested input(s). Such tests also usually exercise only the smallest component (e.g. a unit test instead of a system test), so they can't catch issues arising from interactions.  These sorts of tests can be worse than no test in several ways: they falsely make the component look tested when it isn't, and they spew false positives as soon as you change something, which both discourages improvements and encourages people to blindly update tests and miss true issues. A test that alerts on any change at all can be appropriate for normative consensus code which must not change behaviour at all, but good tests even for that need to exercise boundary conditions and random inputs too. That kind of test isn't very appropriate for things that are okay to change.
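The contrast described above can be shown in miniature. This is a hypothetical Python example; the function under test and the invariant being checked are invented for illustration, not anything from Bitcoin Core:

```python
import random

def median(xs):
    """Function under test: median of a non-empty list of numbers."""
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

# Rigid, known-response test: one typical input, one memorized answer.
# It passes, but says nothing about edge cases or logical correctness.
assert median([1, 2, 3]) == 2

# Property-based test: many random inputs, checking a logical invariant
# rather than memorized outputs -- no more than half the values may lie
# strictly on either side of the median. This probes extremal conditions
# (single elements, duplicates, negatives) the author never thought of.
for _ in range(1000):
    xs = [random.randint(-100, 100) for _ in range(random.randint(1, 50))]
    m = median(xs)
    below = sum(1 for x in xs if x < m)
    above = sum(1 for x in xs if x > m)
    assert below <= len(xs) // 2 and above <= len(xs) // 2
```

The property test never needs updating when benign implementation details change, and a genuine logic error (say, an off-by-one in the even-length case) is far more likely to trip the invariant on some random input than to hit the single memorized case.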

In this case, our existing practices (even those of two years ago) would have been at least minimally adequate to prevent the bug, but they weren't applied evenly enough.  My cursory analysis of the issue suggests a three-component failure: the people who reviewed the change had been pre-primed by looking at the original introduction of the duplicate tests, which came with a strong proof that the test was redundant-- unfortunately, later changes had made it non-redundant, apparently without anyone realizing it. People who wouldn't have been snowed by that argument (e.g. Suhas never saw the change at all, and since he wasn't around for PR443 he probably wouldn't have easily believed the test was redundant) just happened to miss that the change happened. And review of the change got distracted by minutiae, which might have diminished its effectiveness. GitHub, unfortunately, doesn't provide good tools to help track review coverage. So this is an area where we could implement some improved process to make sure that the good things we do are done more uniformly; doing so probably won't make anything slower.  Similarly, a more systematic effort to ensure that all functionality has good tests would go a long way: new things in Bitcoin tend to be tested pretty well, but day-zero functionality that never had tests to begin with isn't always.

It takes time to foster a culture where really good testing happens, especially because really good testing is not the norm in the wider world.  Many OSS and commercial projects hardly have any tests at all, and many that do hardly have good ones. (Of course, many also have good tests too... it's just far from universal.)  We've come a long way in Bitcoin-- which originally had no tests at all, and for a long time only had 'unit tests' that were nearly useless (almost entirely examples of that kind of bad testing). Realistically it'll continue to be slow going especially since "redesign everything to make it easier to test well" isn't a reasonable option, but it will continue to improve. This issue will provide a nice opportunity to nudge people's focus a bit more in that direction.

I think we can see the negative effect of "go slower" in libsecp256k1.   We've done a lot of very innovative things with regard to testing in that sub-project, including designing the software from day one to be amenable to much stronger testing, and we've had some very good results from doing so.  But the testing doesn't replace review, and as a result the pace of the project has become very slow-- with nice improvements sitting in PRs for years. Slow pace begets slow pace, so less new review and testing happen too... and fewer new developments that would also improve security get completed.

The relative inaccessibility of multisig and hardware wallets are probably examples where conservative development in the Bitcoin project have meaningfully reduced user security.

It's also important to keep in mind that in Bitcoin performance is a security consideration too.  If nodes don't keep well ahead of the presented load, the result is that they get shut off or never started up by users, larger miners get outsized returns, miners centralize, and attackers partition the network by resource-exhausting nodes.  Perhaps in an alternative universe-- where in 2013, when it was discovered that pre-0.8 nodes would randomly reject blocks larger than ~500k, the block size had been permanently limited to 350k-- we could have operated as if performance weren't critical to the survival of the system, but that isn't what happened. So "make the software triple-check everything and run really slowly" isn't much of an option either, especially when you consider that the checks themselves can have bugs.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: Cøbra on September 23, 2018, 05:52:11 PM
For example, right now, a huge majority of the network is running vulnerable software, but because the alert system was removed, there's no way to reach them and tell them to upgrade. We just have to pray they check certain sites. Makes no sense. Some sort of alert feature in software that handles money is a necessity, it could have been enabled by default with an option to disable it. Satoshi was so smart and practical to add that.
You do realize that you're lamenting the removal of a back door key that could be used to remotely crash nodes?   Even without the crash it inherently meant that a single potentially compromised party had the power to pop up to all users of the software potentially misleading messages-- like falsely claiming there were issues that required replacing the software with insecure alternatives.

No thanks.

No one who wants that power should by any account be allowed to have it.  Not if Bitcoin is to really show its value as a decenteralized system.

But it just goes to show that for every complex problem there is a simple centralized solution and inevitably someone will argue for Bitcoin to adopt it.

Bitcoin is the consensus rules and protocol, which Bitcoin Core implements, but Bitcoin is not synonymous with everything in Bitcoin Core. The alert system was never part of the consensus rules, and therefore doesn't have anything to do with Bitcoin, so being excessively concerned with decentralization makes no sense in this context.

Maybe the previous implementation of an alert system was flawed, with the crashing bugs and the alert key being controlled by dubious people, but this doesn't make alerts in and of themselves a terrible idea. The unlikely event of the alert key being abused is mitigated by having a "final" alert broadcast; any harm an attacker could do is minimal. The benefits far outweigh the risks here. But somehow allowing thousands of users to unknowingly keep running insecure financial software is acceptable to you, so long as you believe it improves some vague notion of decentralization (it doesn't). Let's be glad this bug wasn't something that could be exploited to hack into nodes remotely and steal funds, like a buffer overflow or use-after-free.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: gmaxwell on September 23, 2018, 06:06:18 PM
and therefore doesn't have anything to do with Bitcoin,
The fact that it created real vulnerabilities that could have been used to cause massive funds loss, even for people who removed the code, suggests otherwise!  Something doesn't have to be part of consensus rules directly to create a risk for the network.

The general coding style in Bitcoin makes RCEs unlikely, or otherwise I would say I'd be unsurprised if the alert system had one at some point in its history... It was fairly sloppy and received inadequate attention and testing because of its infrequent use, and because only a few parties (like the Japanese government ... :( ) could make use of it.

and I say this as one of the uh, I think, three people to have ever sent an alert.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: Cøbra on September 23, 2018, 06:08:15 PM
Quote
In an effort to increase communications, we are launching an opt-in, announcement-only mailing-list for users of Bitcoin Core to receive notifications of security issues and new releases.

https://bitcoincore.org/en/2016/03/15/announcement-list/

Apparently, according to Luke-jr who's subscribed, Bitcoin Core still hasn't even bothered to use their announcement mailing list (specifically set up for security issues and new releases) to warn about CVE-2018-17144 yet (a security issue fixed in a new release no less!). That's pretty incompetent if you ask me.

https://twitter.com/LukeDashjr/status/1043917007303966727


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: gmaxwell on September 23, 2018, 06:13:16 PM
Apparently, according to Luke-jr who's subscribed, Bitcoin Core still hasn't even bothered to use their announcement mailing list (specifically set up for security issues and new releases) to warn about CVE-2018-17144 yet (a security issue fixed in a new release no less!). That's pretty incompetent if you ask me.
The announcement of the new release and fix was sent to it.  You mean the guy who controls bitcoin.org isn't subscribed to it? That's pretty... suboptimal, if you ask me. :)

It's easy to point fingers and assert that the world would be better if other people acted according to your will instead of their own free will; it's harder to look at your own actions and contemplate what you could do to improve things.  To risk making the same mistake myself: instead of calling people who owe you nothing incompetent, you could offer to help...  Just a thought.  It's the advice I've tried to follow myself, and I think it's helped improve the world a lot more than insults ever would.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: aliashraf on September 23, 2018, 07:09:48 PM
I suggest we are done with both multiple competing implementations and the alert system. Aren't we?

If yes, it is time to ask the critical questions once again:

What is there to be learned from CVE-2018-17144 incident? What measures should be taken hereafter?

I'm not used to agreeing with Greg Maxwell, but exceptionally I do here: multiple implementations are a stupid idea, and the removal of the alert system in Bitcoin was a correct decision, imo. But I think this is more about what is right and what should be done, rather than rejecting false ideas and pointing out what should not be done.

 


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: Cøbra on September 23, 2018, 07:16:22 PM
It's easy to point fingers and assert that the world would be better if other people acted according to your will instead of their own free will; it's harder to look at your own actions and contemplate what you could do to improve things.  To risk making the same mistake myself: instead of calling people who owe you nothing incompetent, you could offer to help...  Just a thought.  It's the advice I've tried to follow myself, and I think it's helped improve the world a lot more than insults ever would.

I think this discussion is getting away from the general topic, but you were recently someone who would attack other development teams trying to "improve the world" in their own way with even harsher terms and toxic insults. I don't think polite discourse is your strong suit either, when you are known for pointing fingers at people who fork your project.

What I seem to be getting from your posts is that almost every change is a consensus risk, and that development should move faster or at the same pace, but not slower. You claim that moving slower would potentially result in less testing, which makes no sense. You could simply have a feature or optimization on a test branch, encourage users to mess with it (by actually interacting with the feature, as awemany was doing), possibly even with some sort of bounty scheme, and once sufficient time has passed and lots of different people and companies in the industry have tried to break it, you merge it in. I can't fathom how moving slower wouldn't help here.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: gmaxwell on September 23, 2018, 07:50:05 PM
but I think it is more about what is right and what to do rather than rejecting false ideas and pointing out what should not be done.
I think I did speak to it some.

More directly: Rather than shy from danger we must continue to embrace it and manage it.

The safest change is one you think is dangerous but is actually safe; the next safest is one that is dangerous and you know is dangerous; and the least safe is one you think is safe but is actually dangerous. There is pretty much no such thing as a change you think is safe that actually is safe. Failing to change something that should be changed is not safe either, because the world doesn't stay constant, and past reliability doesn't actually mean something was free of errors.

Obviously danger for the sake of danger would be dumb, so we should want to make a reasonable number of known dangerous changes which are clearly worth it and which justify the review and testing needed to make them safe enough-- work that will, along the way, find issues we didn't know we had.

If instead people feel burned by this issue and shy even further away from dangerous changes, the activity won't justify due care, and eventually we'll get burned by a "safe" change that isn't actually safe.

As mentioned, even the complete absence of changes isn't safe, since without changes old vulnerabilities and security-relevant shortcomings don't get addressed (e.g. I mentioned multisig and hardware wallets, but there are many others).

Not to point fingers-- especially as there was no causal relationship in this instance-- but it occurs to me that in many recent posts you've advocated radically reducing the inter-block interval: an argument that is only anything but laughable specifically because of a multitude of changes like 9049, the change that introduced this bug, squeezing out every last ounce of performance (and knocking the orphan rate at 10 minutes from several percent to something much smaller).  If our community culture keeps assuming that performance improvements come for free (or that the tradeoffs from load can be ignored and dealt with later), then they're going to keep coming at the expense of other considerations.

As far as concrete efforts go, it's clear that there needs to be a greater effort to go systematically through the consensus rules and make sure every part of the system has high-quality testing-- even the old parts (which tend to be undertested)-- and that, where needed, the system is adjusted to make higher-quality testing possible. To achieve that it will probably also be necessary to have more interaction about what constitutes good testing; I'll try to talk to people more about what we did in libsecp256k1, since I think it's much closer to a gold standard. I am particularly fond of 'mutation' style testing, where you test the tests by introducing bugs and making sure the tests fail, but it's tricky to apply to big codebases and is basically only useful after achieving 100% condition/decision test coverage. Here (https://github.com/bitcoin/bitcoin/pull/10195#issuecomment-302839091) is an example of me applying that technique in the past with good results (it found several serious shortcomings in the tests and, as a side effect, an existing long-standing bug in another subsystem).
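A toy illustration of the mutation-testing idea: deliberately break the code under test and check whether the suite notices. Everything here is hypothetical Python invented for illustration -- real mutation tooling and its application to Core would be far more involved:

```python
import operator

def run_tests(cmp) -> bool:
    """A tiny 'suite' for an eligibility check built on comparison `cmp`.
    The correct behaviour is: eligible if and only if age >= 18."""
    def eligible(age):
        return cmp(age, 18)
    # Note: the suite never tests the boundary age == 18.
    return eligible(30) is True and eligible(5) is False

# Mutants: the original comparison, and two deliberately broken variants.
mutants = {
    "original (>=)": operator.ge,
    "mutant   (>) ": operator.gt,
    "mutant   (<) ": operator.lt,
}
for name, cmp in mutants.items():
    verdict = "killed" if not run_tests(cmp) else "survived"
    print(f"{name}: tests {verdict}")

# The (<) mutant is killed (the tests fail, as they should), but the (>)
# mutant survives: since the suite never exercises age == 18, an off-by-one
# bug passes every test. A surviving mutant pinpoints exactly the kind of
# boundary gap that rigid, known-response tests leave open.
```

This "test the tests" loop is what catches suites that look thorough by line coverage but would wave through a real off-by-one, the same class of subtle boundary error that consensus code cannot afford.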

I think there is also an opportunity to improve the uniformity of review but I'm not quite sure how to go about doing that: E.g. checklists have a bad habit of turning people into zombies, but there are plenty of cases where well constructed ones have had profoundly positive effects. Unfortunately we still just don't have enough really strong reviewers.  That cannot be improved too quickly simply because it takes a lot of experience to become really effective. We have more contributors now than in the past though, so there is at least the potential that many of the new ones will stick around and mature into really good reviewers.

We also shouldn't lose sight of the big things that we're getting right in other ways, and the limitations of those approaches.  So for example, 9049 introduced a fault in the course of speeding up block propagation.  One alternative we've used since 2013 is implementing fast propagation externally: doing so reduces the risk that new propagation stuff introduces bugs and also lets us try new techniques faster than would be possible if we had to upgrade the whole network.  Bitcoin Fibre has a new, radically different, ultra-super-mega-rocket science approach to propagating blocks. The Fibre protocol's implementation has also been relatively chock-full of pretty serious bugs, but they haven't hurt Bitcoin in general because they're separate.  Unfortunately, we've found that with the separation few parties other than Matt's public fibre network run it, creating a massive single point of failure where, if his systems are turned off, block propagation speed will drop massively.  To address that, we've been creating simplified subsets of that work, hardening them up, making them safe, and porting them to the network proper-- BIP152 (compact blocks) is an example of that.  A gap still exists (fibre has much lower latency than BIP152) and the gap is still a problem, since propagation advantages amplify mining centralization (as well as make selfish mining much more profitable)... but it's less of one.  This is just one of the ways we've all managed the tension between a security-driven demand to improve performance and a security-driven demand to simplify and minimize risky changes. There just isn't a silver bullet.  There are lots of things we can do, and do do, but none magically solve all the problems. The most important thing is probably that we accept the scope of the challenge and face it with a serious and professional attitude, rather than pretending that something less than a super-human miracle is good enough. :)



Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: gmaxwell on September 23, 2018, 08:04:04 PM
but you were recently someone who would attack other development teams trying to "improve the world" in their own way with even more harsh terms and toxic insults
It would be more productive if you were specific instead of vague. The vague allegations, devoid of context, just come across as toxic themselves-- a character attack, rather than a complaint about something specific that could be handled better.

I believe my suggested improvements to you above-- subscribe to the list, look for things you can do to help-- were specific, actionable, well justified, and hopefully not at all personally insulting.

Quote
encourage users to mess with it, possibly even with some sort of bounty scheme, and once sufficient time has passed and lots of different people and companies in the industry have tried to break it, you merge it in
In general, people ignore "test" things; they just don't interact with them at all unless the thing in question is on rails that will put it somewhere critical reasonably soon.

We see this pattern regularly, when people put up complex PRs and explicitly mark them as not-merge-ready they get fairly small amounts of substantive review, and then after marking them merge ready suddenly people show up and point out critical design mistakes. Resources are finite so generally people prefer to allocate them for things that will actually matter, rather than things that might turn out to be flights of fancy that never get finished or included. They aren't wrong to do it either, there are a lot more proposals and ideas than things that actually work and matter.  You can also see it in the fact that far more interesting issues are reported by users using mainnet rather than testnet.  This isn't an argument to not bother with pre-release testing and test environments: both do help. But I think that we're likely already getting most of the potential benefit out of these approaches.

"testing" versions also suffer from a lack of interaction with reality. Many issues are most easily detected in live fire use, sadly.  And many kinds of problems or limitations are not the sort of thing that we'd rather never have an improvement at all than suffer a bit of a problem with it.

Additionally, how much delay do you think is required? This issue went undetected for two years. Certainly with lower stakes we could not expect a "testing" version to find it _faster_.  Sticking every change behind a greater-than-two-year delay would mean a massive increase in exposure from the lack of beneficial changes, not to mention the knock-on cultural effects: I should have cited it specifically, but consider the study results suggesting that safety equipment like bike helmets (http://mentalfloss.com/article/73670/paradoxical-ways-bike-helmets-make-us-less-safe) or seatbelts sometimes results in less safe behavior. Checklists can be valuable, for example, but have also been shown to cause a reduction in personal responsibility and careful consideration: "I don't have to worry about reviewing this thing, because any issues will be found by the process and/or will be someone else's problem if they get through".

As I said above, if we had a series of concerning issues being found shortly after releases it might be a good case for going more slowly, generally. But that isn't the case. With only a few exceptions, when we find issues that are concerning enough to justify a fast fix, they're old issues.  We do find many issues quickly, but before they're merged or released.

I've generally found that for software quality each new technique has its own burst of benefit, but continued deeper application of the technique has diminishing marginal returns. Essentially, any given approach to quality has a class of problems it stops substantially while being nearly blind to other kinds of issues. Even limited application tends to pick the low-hanging fruit.  As a result it's usually better to assure quality in many different ways by many different people, rather than dump inordinate effort into one or a few techniques.

Quote
You claim that moving slower would potentially result in less testing, which makes no sense.[...] I can't fathom how moving slower wouldn't help here.
I'm disappointed; I think I explained directly and via analogy why this is the case, but it doesn't seem to have been communicated to you. Perhaps someone else will give a go at translating the point, if it's still unclear. :(


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: aliashraf on September 23, 2018, 11:12:16 PM
Greg Maxwell,
I truly appreciate your responsible presence here, but let me shed a different light on this issue. The way I understand the problem, we have multiple factors involved:

1- Bitcoin has seamlessly grown from an experimental project to a mission critical system. A critical vulnerability exploit is totally intolerable and would cause serious damage to a large population. This is what has agitated us, reminded us of the critical aspects of the issue, and prompted theymos to start this topic.

2- Although, as a consensus based protocol, bitcoin is a social contract that inherently resists change and treats it as an existentially paradoxical subject, evolution is inevitable just as in any other living system. This suggests the likelihood of an unusual development process beyond the common practices familiar in software engineering.

3- As the first widely adopted decentralized system, which surprisingly happened to be implemented in one of the most sensitive fields ever, monetary systems, bitcoin is an unprecedented social experiment. As exciting as it is, there are consequences, the most critical one being the ambiguity in governance.

4- Any system with a governance problem is liable to lose direction and become subject to technocratic decision-making. Lack of vision and strategy encourages sparse, disconnected developments which are typically reduced to purely technical processes.


5- Both as a result of decentralization and governance issues on one hand, and because of a tradition based on the immutability of blockchain data as an initial requirement in bitcoin on the other, the client software is supposed to be backward compatible, the original blockchain is to be maintained, and bootstrapping fresh nodes from the genesis block through the whole chain should be supported. As a result, developers and contributors use sophisticated techniques to comply with these requirements.

6- Because of the last two factors, bitcoin now suffers from a software engineering problem: software bloat. Through the years, incremental development, plus the effort of keeping the system backward compatible and the insistence on soft forks over hard forks, has led to a situation in which the bitcoin code is becoming more and more complicated and harder to understand, contribute to, and maintain.

I know you can argue in favor of soft forks or backward compatibility or bootstrapping very persuasively, but everything comes with a price, and if bitcoin were a simple centralized system with no governance issues, we would have had it completely redesigned and rewritten from scratch 2 or 3 times in the almost 10 years since its first release, in the most conservative scenario. So, it is not about how good backward compatibility is; we are just used to supposing there is no other option.

Now we are here. We can't disrupt the governance situation, but there are definite reconsideration opportunities that we have to embrace. For instance, suppose somebody (a reckless person like me ;) ) suggests that maintaining the blockchain since the genesis block and bootstrapping from it is not necessary, and that we could instead use a snapshot of the UTXO set which, after being cumulatively confirmed by something like 10,000 blocks, would let the historical data behind it be relaxed.
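A minimal sketch of the burial-depth rule being floated here (the 10,000-block figure comes from the post above; the function and names are purely illustrative, not any actual proposal's code):

```python
# Illustrative sketch of the "snapshot after N cumulative confirmations"
# idea: a UTXO snapshot is treated as final once enough blocks have been
# built on top of it, and the history behind it may then be "relaxed"
# (i.e. no longer required for bootstrapping).

SNAPSHOT_CONFIRMATIONS = 10_000  # the depth suggested in the post

def is_snapshot_final(snapshot_height, tip_height):
    """True once at least SNAPSHOT_CONFIRMATIONS blocks bury the
    snapshot taken at snapshot_height."""
    return tip_height - snapshot_height >= SNAPSHOT_CONFIRMATIONS
```

For example, a snapshot taken at height 500,000 would only be considered final once the chain tip reaches height 510,000 or beyond.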

As a common practice, and before CVE-2018-17144, it would have been a controversial proposal and wouldn't have gotten any support, because nobody was aware of the critical situation with the code from a software engineering point of view, and the impact of such a proposal on making the code an order of magnitude more elegant and straightforward would not have been weighed properly.

Now, in the post-CVE-2018-17144 era, we have to be far more open to any proposal that helps make and keep the code as smart and compact as possible. It is a general paradigm shift; we should embrace it and help it happen with the fewest possible casualties and disasters. In this new era code simplicity and beauty come first, and we MUST put them at the top of our priority list.





Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: Cøbra on September 23, 2018, 11:30:37 PM
It would be more productive if you were specific instead of vague. The vague allegations, devoid of context, just come across as toxic themselves-- a character attack, rather than a complaint about something specific that could be handled better.

You have attacked me with character attacks by viciously claiming on Reddit that I had sold my credentials, and you even sent me an e-mail out of the blue one day asking how much I "sold out" for. I don't have time to look through your extensive post history on /r/btc, but I remember you spent years wrestling with pigs and constantly harassing and deriding people that wanted to "improve the world". I attacked these people too in similar ways, and many of them were incompetent, but I think you of all people aren't in a position to preach the value of polite discourse, since you're one of the more toxic/controversial figures in the Core team.

I'm disappointed, I think I explained directly and via analogy as to why this is the case but it doesn't seem to have been communicated to you. Perhaps someone else will give a go at translating the point, if its still unclear. :(

Yes, this was your analogy:

Imagine a bridge construction crew with generally good safety practices that has a rare fatal accident. Some government bureaucrat swings in and says "you're constructing too fast: it would be okay to construct slower, fill out these 1001 forms in triplicate for each action you take to prevent more problems".  In some kind of theoretical world the extra checks would help, or at least not hurt.  But for most work there is a set of optimal paces where the best work is done.  Fast enough to keep people's minds maximally engaged, slow enough that everything runs smoothly and all necessary precautions can be taken.  We wouldn't be too surprised to see that hypothetical crew's accident rate go up after a change to increase overhead in the name of making things more safe: either efforts that actually improve safety get diverted to safety theatre, or otherwise some people just tune out, assume the procedure is responsible for safety instead of themselves, and ultimately make more errors.

This analogy is flawed and makes no sense. Bridge construction is completely different from software engineering through the open source process. Construction is a linear thing: you can't build multiple "prototypes" of real physical bridges, test them, and choose between them; you need to build the entire thing once, and in that context you would be correct that it makes more sense to keep minds maximally engaged. But in Bitcoin Core, developers can work in their own branches with total freedom and no red tape, so I fail to see how they wouldn't be engaged. There's nothing stopping them from working at their "optimal paces" in their own branch and then opening a pull request after their sprint to try to get the change merged in. There already exists a testing/review step; IMO there's no harm in making this step slightly longer and encouraging the community to try to break and mess up a new feature. Bounties can be paid to try to break stuff too.

Anyway I'm exiting this discussion because I feel like we're going to go around in circles and derail the thread, and I've said what I wanted to say. I think we should take the personal issues to PM or something. Cheers.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: gmaxwell on September 24, 2018, 12:57:31 AM
You have attacked me with character attacks by viciously claiming on Reddit that I had sold my credentials,

This (https://www.reddit.com/r/Bitcoin/comments/82ofw9/cobrabitcoin_must_step_down_asap_from_bitcoinorg/dvcgmsb/) is the post that you're referring to:

Quote
I get the impression that cobra sold his credentials last year: He put up some sketchy warnings about the binaries on bitcoin.org then went quiet for a long time.

When he came back he started posting some really over-the-top rbtc conspiracy theory nonsense on reddit. When people moved to take action about it he suddenly said "oh my account was hacked" and dropped it. But the account wasn't used for the kind of petty vandalism that you normally see when a hacked account can't otherwise be used... Since then he's been slowly cranking up the psycho behavior, and right now he seems not far from the sudden behavior of the 'hacked' account.

Given that, I'm not surprised to see the BCH pumping-- and of course that ignores that whatever "better for payments" argument you can currently make for BCH could better be applied to Litecoin (which also has a lower interblock interval, AND segwit), and yet litecoin has mostly gone nowhere.

FWIW, no one wants a POW change more than Bitmain. They crank out chips privately for even obscure POWs, then dump them on the public once they've reached diminishing returns on their own production. With sha256d they're competing against a huge installed base. Moreover, Bitmain has gone around unethically and unlawfully claiming patents on basic mining techniques, like series-wiring the chips to reduce converter costs, which were in use prior to Bitmain and which any competitive mining device for any POW would adopt.

I apologize for insulting you, it was really not my goal.  I'm not sure what else I was supposed to think when one day you're asking blockstream for money (and suggesting bitcoin.org/maybe you were broke) and then later [edit: I thought it was a few weeks, but it may just be that I only noticed the message then or there might have been more than one. I no longer have my blockstream account so I can't tell] started posting things like Merchant adoption will come naturally once people realize that the other coin is crippled by Blockstream and /u/nullc and that they can't transact without paying outrageous transaction fees. [...] Of course bitcoin.org should be changed to embrace Bitcoin Cash. Blockstream coin is not Bitcoin. [...] It's a form of censorship by Blockstream Core.  [...] This is what AXA invested in them for, to cripple the network.  (http://archive.is/mrXQY). I'd never seen you say anything like that before. And even more recently you continue to say things that look a lot like (https://twitter.com/cobrabitcoin/status/994351375457947648) it to me, also (https://twitter.com/CobraBitcoin/status/1022110974856314881), also (https://twitter.com/CobraBitcoin/status/995809925803790337), also (https://twitter.com/CobraBitcoin/status/1023273672881197056) (especially weird since you yourself told us downloads on bitcoin.org were unsafe, you seemed to think alternative downloads were a good idea, and then a year later are angry about it), also (https://twitter.com/CobraBitcoin/status/1042088732281774083).   It's okay for you to go around suggesting "compromised by the NSA" without a single shred of evidence, but you think it's toxic for me to say that I "get [an] impression" and point out an apparent radical change in your behaviour? :(

Theymos says he thinks you've been consistent all along, I trust him to know, but it's not like my comments were coming out of nowhere.  I had no reason to dislike you previously, in fact almost the sum total of my other interaction with you was stepping up to defend you when I thought people were unfairly attacking you after you said something easily misunderstood (like the 'revise the whitepaper' thing).

Why do you find it so insulting that I wondered if you sold your credentials-- with an explanation of my concerns-- after you started attacking someone who's done, AFAIK, nothing but support you previously, while you seem to think it's okay to spread worse claims about other people?

I'd say "what would you do in my shoes"-- but it seems like the answer is that you'd make accusations and not even provide evidence.  Is that really your intent?

Quote
but I remember you
Memory is a tricky thing. In fact, when writing the above I multiple times thought you had also posted additional things that turned out to have actually been said by people who responded to the posts above agreeing with your attacks, but which weren't actually said by you-- sorry about that, but at least I haven't accused you of those things, because I actually checked.

I am one of the only project contributors who actually takes the time to even try to communicate with people who seem to be significantly confused. 99.9% of the time other people will just ignore them completely. I don't think it helps improve the level of discourse if everyone puts themselves in a high castle and doesn't even hear out their opposition or respect them enough to even argue the case.  But at least when I'm critical of your actions I'm willing to be precise enough that you can defend or contextualize them... or even admit a mistake.  I've certainly made mistakes, but at least I've tried to do something good. I feel like your comments here-- with name calling like "incompetent"-- are saying that you'd prefer a world where no one does anything (except maybe insult and conspiracy-theorize about others), because if they do enough good things you'll ignore all that and attack them for the few things that could be improved.  If that isn't what you're going for, I'd really like you to help me understand where you're coming from.

Quote
But in Bitcoin Core, developers can work in their own branches with total freedom, and no red tape,
And no review, which was my point.

Quote
IMO there's no harm in making this step slightly longer
Perhaps, but I don't see how slightly longer connects with your post. Already the major release cycle is six months long.  This issue took two years to discover, making the cycle seven months long would not have made it get detected. But it might plausibly make people take review less seriously.  I guess my point there is that we've already made it a lot more than slightly longer, and tapped out the benefit from doing so, further increases might tip us further into the realm of costs exceeding benefits and there are other things we can do that don't add delays but would do more to prevent serious problems.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: chek2fire on September 24, 2018, 01:14:22 AM
I'd like to add my opinion to this very interesting discussion.
Maybe it is time for Bitcoin to focus again on being stable, unbanking, safe-haven money rather than on being another funky cryptocurrency. For this reason, imo, Bitcoin should only rarely have radical upgrades.
It seems that over the last two years the Bitcoin devs were forced by this ridiculous fork and big-block drama to be more creative, to prove to everyone that they can bring Bitcoin to the next level-- an unnecessary move imo, because bitcoin was not designed to be a funky cryptocurrency but a stable, decentralised money that everyone can entrust value to.
Unfortunately, radical changes to code will, with 100% certainty, also bring bugs. This has happened throughout the history of software engineering, even with the most brilliant developers.

Quote
---offtopic---
The Cobra account seems to be controlled by different persons, and this is the reason why this account has so many different reactions; even his twitter account-- don't forget the two-month campaign against Halong Mining. This account is another of the many paradoxes in the Bitcoin world :P. It is a very funny paradox for sure :D
https://pbs.twimg.com/media/DnZeAqAV4AAJqac.jpg
https://pbs.twimg.com/media/DnZeFeeUUAEeBxQ.jpg


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: Cøbra on September 24, 2018, 03:07:38 AM
I apologize for insulting you, it was really not my goal.  I'm not sure what else I was supposed to think when one day you're asking blockstream for money (and suggesting bitcoin.org/maybe you were broke) and then a few weeks later started posting things like Merchant adoption will come naturally once people realize that the other coin is crippled by Blockstream and /u/nullc and that they can't transact without paying outrageous transaction fees. [...] Of course bitcoin.org should be changed to embrace Bitcoin Cash. Blockstream coin is not Bitcoin. [...] It's a form of censorship by Blockstream Core.  [...] This is what AXA invested in them for, to cripple the network.  (http://archive.is/mrXQY).

You apologize, only to spit in my face with more vicious attacks. My Reddit account was compromised at that time, but I quickly regained access to it. I told this fact to a few people who contacted me in concern, and thought the issue was put to rest, but it turns out it's being intentionally resurfaced to discredit me.

And "asking Blockstream for money" because I was "broke"? Seriously? I contacted a whole bunch of businesses about sponsorships for bitcoin.org, something I've done for a while. I've pasted the email below. Your timeline of events is wrong, the compromised posts are from Sept 2017, but this email was sent in May 2016. So it wasn't "a few weeks later" that my Reddit account posted those things. You are being intentionally deceptive, vague and making up timelines to make me seem more erratic and malicious. I might be distrustful of Blockstream (I don't trust most American technology companies, and I didn't trust the Foundation too much either), but when you make up timelines, misconstrue things, and behave like an amateur NSA PSYOP agent, it doesn't help your case.

Back on topic: I think there are two sets of Core users: those who run their node and rarely update it, and the more enthusiastic ones who keep up with upgrades. It might make sense to have an LTS version with more thoroughly tested and vetted consensus-critical code (that's proven itself), and a regular version. I think more choice and flexibility could be useful here.

Obviously it won't catch all bugs-- there will always be bugs-- but it might help minimize them. Though with all the eyes on the code now after the recent bug, especially around optimizations, maybe more people will be ready to point out critical flaws.

Quote
Date: Mon, 30 May 2016 11:38:52 +0000
From: =?UTF-8?Q?C=C3=B8bra?= <domain@bitcoin.org>
To: <inquiries@blockstream.com>
Cc: "Gregory Maxwell" <greg@xiph.org>
Message-Id: <15501759afd.df43a501857.5742453511502958330@bitcoin.org>
Subject: Bitcoin.org Sponsorship
MIME-Version: 1.0
Content-Type: multipart/alternative;
    boundary="----=_Part_2033_1126291026.1464608332543"
X-Priority: Medium
User-Agent: Zoho Mail
X-Mailer: Zoho Mail
X-ZohoMail-Sender: Cøbra

------=_Part_2033_1126291026.1464608332543
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

Hey,

Bitcoin.org is currently looking for a new sponsor (previously we were sponsored by the Bitcoin Foundation) and we were wondering if Blockstream would be interested in supporting the site financially. The site continues to get large increases in traffic, and we want to ensure that the site remains fast, online and secure well into the future. Bitcoin.org is the first place most new users go to learn about bitcoin, it teaches them how Bitcoin works, and helps them get set up with a wallet. The site's content has been translated into many languages, and any user is free to make a pull request on Github to improve the site.


If this opportunity is something that interests you, then please let me know, and we can discuss further the details of a sponsorship arrangement. Thanks.


------=_Part_2033_1126291026.1464608332543--


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: Hueristic on September 24, 2018, 04:03:48 AM
...I am one of the only project contributors who actually takes the time to even try to communicate with people who seem to be significantly confused. 99.9% of the time other people will just ignore them completely. I don't think it helps improve the level of discourse if everyone puts themselves in a high castle and doesn't even hear out their opposition or respect them enough to even argue the case.  But at least when I'm critical of your actions I'm willing to be precise enough that you can defend or contextualize them... or even admit a mistake.  I've certainly made mistakes, but at least I've tried to do something good. I feel like your comments here-- with name calling like "incompetent"-- are saying that you'd prefer a world where no one does anything (except maybe insult and conspiracy-theorize about others), because if they do enough good things you'll ignore all that and attack them for the few things that could be improved.  If that isn't what you're going for, I'd really like you to help me understand where you're coming from....


OT:
I would just like to point out that I, and I'm sure many others, appreciate the effort you put into responding, even though we don't insert ourselves into the conversation because we know we are not well enough versed in the code and/or game theory. For everyone in the conversation there are thousands who did not ask the same question, no matter what it was. So I'm just posting that it is appreciated that you spend the time to do so. I also appreciate the fact that when I go back and research something I usually find a post from you, and that is a post that I can take to the bank. Ok, enough brown-nosing, keep up the good work. :)


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: Quickseller on September 24, 2018, 04:08:59 AM
Obviously, it would be much safer for a community to take care of one implementation with fewer lines of code.
I don't think it is necessarily best to rely on the "community" to ensure that each implementation of a bitcoin node is secure/safe to use.

Members of the community might have, at most a few million dollars worth of bitcoin of their own money at stake, but even if they make a mistake, they are unlikely to personally lose any money. On the other hand, there are several bitcoin related businesses that have billions of dollars worth of customer money, and hundreds of millions (and in some cases billions) of dollars of equity who have serious incentives to ensure these types of bugs don't pop up with software in production, and they have incentives to have fail-safes in place to prevent any actual losses if/when these types of bugs make it through the cracks.

I would point out that I am not aware of any major exchange "pausing" deposits and/or withdrawals immediately after this bug was discovered; however, anyone running the relevant software would have taken some time to stop deposits/withdrawals to upgrade their nodes (which would include reviewing the code). This leads me to believe that the majority of exchanges/businesses are running their own custom node software, maybe not exclusively, but at least as part of what they are running.



Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: RHavar on September 24, 2018, 04:11:45 AM
One thing I've always wanted to do -- but have never had the energy for -- was to run multiple implementations of bitcoin (e.g. btcd and bitcoin core) and only transact while they are in agreement.

For most bitcoin businesses, a few hours of not processing deposits/withdrawals is actually not a big deal, and happens pretty regularly anyway (through no fault of bitcoin itself). While on the other hand, transacting after an accidental chain-split could really be devastating.
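A hedged sketch of that setup: `getbestblockhash` is a standard JSON-RPC method in both Bitcoin Core and btcd, but the endpoint URLs, credentials, and surrounding plumbing below are placeholders. The agreement check is kept as a pure function so the gating logic is easy to reason about without live nodes:

```python
# Sketch: only proceed with deposits/withdrawals while every node
# implementation reports the same best block hash.
import json
import urllib.request

def in_agreement(best_hashes):
    """True only if every implementation reports the same, non-error
    best block hash (None marks a node that failed to answer)."""
    hashes = set(best_hashes)
    return len(hashes) == 1 and None not in hashes

def best_block_hash(rpc_url):
    """Ask one node for its best block hash via JSON-RPC; None on error."""
    payload = json.dumps({"method": "getbestblockhash",
                          "params": [], "id": 1}).encode()
    req = urllib.request.Request(
        rpc_url, data=payload,
        headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return json.load(resp)["result"]
    except (OSError, KeyError, ValueError):
        return None

# Example wiring (placeholder URLs for a Core node and a btcd node):
# nodes = ["http://user:pass@127.0.0.1:8332",
#          "http://user:pass@127.0.0.1:8334"]
# if in_agreement([best_block_hash(u) for u in nodes]):
#     process_deposits_and_withdrawals()  # hypothetical business logic
```

Treating "any node unreachable" the same as "nodes disagree" errs on the side of pausing, which matches the point above that a few hours of downtime is far cheaper than transacting across a chain-split.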


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: gmaxwell on September 24, 2018, 04:19:15 AM
run multiple implementations of bitcoin (e.g. btcd and bitcoin core) and only transact while they are in agreeance.
Monitoring that way can be interesting (use old versions too)... but running them in close proximity to production machines may increase the risk of RCEs and resource exhaustion attacks. Though since you only need a yes/no from the monitoring, it could be isolated without too much trouble.  If this were considered a best practice, though, it would further increase the barrier to entry for participation.

You are a lot more advanced than many Bitcoin using businesses: you actually report bugs and help test fixes. For many others, it's remarkable if they do anything more than call out to a bc.i api. Someone on IRC was pointing out the rather disappointing number of bitcoin sites that were currently managing to expose the bitcoind rpc to the public internet.  :(


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: btc_enigma on September 24, 2018, 06:00:36 AM
I know this is probably the last argument most people want to hear, but is this not a case where more independent implementations would result in less risk?  If you maintain that one particular client should form the "backbone of the network", you have to consider what happens if that backbone breaks.  If there were a wider variety of clients being run, there may have been less of a threat to the network overall?

Core have done exceptional work, but at the end of the day, they're still only human.  Assigning more people to keep an eye on one codebase might help mitigate faults, but if there's only one main codebase, there's still going to be an issue if an error slips through the net.  Hence my belief that more codebases would create a stronger safeguard.
What many people do not realize is that having people run different implementations makes it easier for attackers to partition the network, and thus harder to resolve situations where vulnerabilities are exploited. Network partitioning can cause multiple blockchain forks, which is a much harder situation to resolve than a single fork or an entire network shutdown. It is not just that some nodes go down while the rest stay up and the network keeps running: if the attack is directed in a certain way, miners end up separated and no longer connected to each other, which then causes forks. Because running different implementations makes this kind of partitioning easier, having multiple implementations and recommending that people run alternative software is really not a good thing.

That being said, having multiple implementations is good for the individual who runs multiple nodes with different software: seeing the nodes diverge lets them know when an attack exploiting a critical bug is going on. If everyone ran multiple nodes with different implementations, then multiple implementations would be fine; the network would not shut down and there wouldn't be any partitioning. But not everyone is going to do that.


Excellent point here. I totally agree that the risk of managing multiple forks is much greater.

What I also wanted to ask is why this bug was not detected on testnet3. While we don't expect all bugs to be caught during code review, most network/consensus-related bugs should be detected on testnet.

Perhaps the limited use and importance of testnet is a cause for worry. If testnet3 mirrored mainnet in size and traffic (https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-August/016341.html), and we gave each release six months of staging time on testnet, we could have more robust protection.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: kain134 on September 24, 2018, 06:10:42 AM
/me eating popcorn.

Maybe it's time to promote Bitcoin Knots and other node software that is not Core? It seems to me that having multiple implementations that are not forks of one another would provide resilience against bugs.



Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: AGD on September 24, 2018, 06:56:25 AM
I hope we will end up with a solution that focuses on more testing AND on developing new coding talent among the regulars, maybe through Bitcoin-specific coding challenges/training, bounties, etc.
Also, if coders are being paid by a company that depends on a healthy Bitcoin network, that is obviously good for everyone in the network.

I also see extended testing as the best defense against the intentional injection of bad code by infiltrated state/organisation/company actors, which is probably already taking place.



Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: _biO_ on September 24, 2018, 11:44:00 AM
/me eating popcorn.

Maybe it's time to promote Bitcoin Knots and other node software that is not Core? It seems to me that having multiple implementations that are not forks of one another would provide resilience against bugs.



1) Bitcoin Knots *is* Bitcoin Core plus some enhancements and perhaps different defaults, AFAIK.

2) Read the thread. Multiple implementations *increase* risk.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: Lauda on September 24, 2018, 12:10:40 PM
I know this is probably the last argument most people want to hear, but is this not a case where more independent implementations would result in less risk?
No. This is nonsense that has been pushed by those actively trying to co-opt the network (or propagated by trolls such as franky). Sure, it would be beneficial to have some competition on e.g. the p2p code, but that's about it. More implementations can (and undoubtedly will), as a side effect, lead to even more problems, which will most certainly be harder to solve once multiple node implementations of the network start disagreeing for whatever reason (be it, in this case, a bug). <- this is given that you completely ignore that any attempt at a secondary implementation so far has been amateurish at best.

Multiple implementations *increase* risk.
^

Someone on IRC was pointing out the rather disappointing number of bitcoin sites that were currently managing to expose the bitcoind rpc to the public internet.  :(
Sadly, yes. Project idea: an open-source, complete web implementation (frontend and whatnot).

Back on topic, I think there are two sets of Core users: those who run their node and rarely update it, and the more enthusiastic ones who keep up with upgrades. It might make sense to have an LTS version with more thoroughly tested and vetted consensus-critical code (that has proven itself), and a regular version. I think more choice and flexibility could be useful here.
LTS adoption would, however, make it significantly harder to roll out a fork bugfix/upgrade whenever one is needed.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: DooMAD on September 24, 2018, 12:40:55 PM
I know this is probably the last argument most people want to hear, but is this not a case where more independent implementations would result in less risk?
No. This is nonsense that has been pushed by those actively trying to co-opt the network  

Slightly OT, but can a consensus based system ever really be co-opted, though?  Everyone seems to have their own slightly different ideas of what makes Bitcoin what it is.  If you find yourself on a minority fork, it's because you aren't following consensus.  It's the numbers that matter, not what any one person believes Bitcoin "should" be.  Even though we might disagree with them, many BCH users will argue that Bitcoin has already been co-opted, but the simple fact remains they don't have the numbers behind them to do anything with that assertion.  So they have to settle for being an altcoin.  That's just how it is.  A day may come where you find yourself on the wrong side of consensus.  If that day comes, you would then find yourself deciding whether it's more important to stick with what you think it should be, or to accept it for what it is.  Maybe that's all getting a bit philosophical, though.

Back to the main purpose of the thread, though.  Yes, there are definitely some issues with multiple implementations if it's done in the wrong way.  It seems there's no simple answer to this one.  Aside from the things gmaxwell and achow101 mentioned, I suspect one of the primary flaws with multiple implementations is that much of the code would simply be copied from other implementations anyway.  It wouldn't necessarily ensure catching any present faults, even if people were taking the effort to run two different clients to compare results.  If they've inadvertently duplicated the bug, it won't make any difference.  Much like how any of the altcoins that may have been affected didn't spot duplicate inputs either.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: piotr_n on September 24, 2018, 01:41:50 PM
It seems to me that what we're observing here is central planners trying to defend their monopoly.
And need I remind you that the very reason Bitcoin exists, and has been such a phenomenal success, is that it broke a monopoly of central planners? :)

I'm not even trying to convince anyone that this monopoly is a bad thing, simply because I think it isn't going to matter. If Bitcoin is here to stay, it is just a matter of time before the market players build alternative implementations, customized for their own needs. And the fact that central planners have been saying for quite a while that they "don't care about miners" is most likely only going to accelerate the process.

I know for a fact that making a new implementation is not as hard as the legends say. With a proper team it can be done for a reasonable amount of money, and the money invested is meant to pay for itself later.

Let's be honest: the current bitcoin core software/implementation is a direct descendant of the prototype made by Satoshi. Some components were upgraded or replaced, but the general architecture has not changed since its inception. It's hardly the best architecture for every possible application, and maybe not even the best for any specific one.

So if Bitcoin is here to stay, new implementations coming into existence are inevitable. Not only because there are better ways to do what bitcoin core does, but also because there is (or will be) too much money at stake, and the stakeholders will not be willing to risk their money by relying on a single software implementation and on the responsiveness of one team of people who don't even work for them.

And no matter how much you'd want to, you can't stop anyone from running a compatible yet alternative implementation of a bitcoin node.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: Lauda on September 24, 2018, 02:20:37 PM
It seems to me that what we're observing here is central planners trying to defend their monopoly.
If you believe that there are central planners and a monopoly, then you don't understand Bitcoin.

And the fact that central planners have been stating for quite long that they "don't care about miners" is most likely only going to accelerate the process.
Miners are often idiots and don't decide anything (they shouldn't anyways).

So if Bitcoin is here to stay, new implementations coming into existence are inevitable.
Many have tried.
this is given that you completely ignore that any attempt at a secondary implementation so far has been amateurish at best.

And no matter how much you'd want to, you can't stop anyone from running a compatible yet alternative implementation of a bitcoin node.
If you want to, then you're free to swim in second-grade garbage. :)


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: piotr_n on September 24, 2018, 02:26:56 PM
It seems to me that what we're observing here is central planners trying to defend their monopoly.
If you believe that there are central planners and a monopoly, then you don't understand Bitcoin.

And the fact that central planners have been stating for quite long that they "don't care about miners" is most likely only going to accelerate the process.
Miners are often idiots and don't decide anything (they shouldn't anyways).

So if Bitcoin is here to stay, new implementations coming into existence are inevitable.
Many have tried.
this is given that you completely ignore that any attempt at a secondary implementation so far has been amateurish at best.

And no matter how much you'd want to, you can't stop anyone from running a compatible yet alternative implementation of a bitcoin node.
If you want to, then you're free to swim in second-grade garbage. :)

No, sir. Your jealous comments are amateurish at best.

My software might not have been tested as much as satoshi's code base, but it's proven to work very well, has excellent performance, and is very easy to work with because of its brilliant architecture.
Plus, most of all, it would not have accepted a block with a transaction that spends the same input twice, nor crashed on one. Which is what this whole thread is about.
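For context, the validation missing at the heart of this bug is tiny. A hedged, simplified sketch (not Core's actual C++ code; the outpoint representation as `(txid, vout)` tuples is an assumption, not Core's data types):

```python
# Illustrative duplicate-input check: the kind of per-transaction
# validation whose removal (as a 0.14.0 performance optimization)
# led to CVE-2018-17144.

def has_duplicate_inputs(outpoints):
    """True if the transaction spends the same (txid, vout) twice."""
    seen = set()
    for op in outpoints:
        if op in seen:
            return True  # same coin spent twice within one transaction
        seen.add(op)
    return False

# Spending one coin twice in a single transaction double-counts its
# value (i.e. inflation), so validation must reject such a block:
bad_tx = [("aa" * 32, 0), ("aa" * 32, 0)]
good_tx = [("aa" * 32, 0), ("bb" * 32, 1)]
print(has_duplicate_inputs(bad_tx))   # True
print(has_duplicate_inputs(good_tx))  # False
```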


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: Lauda on September 24, 2018, 02:36:40 PM
No, sir. Your jealous comments are amateurish at best.

My software might not have been tested as much as satoshi's code base, but it's proven to work very well, has excellent performance, and is very easy to work with because of its brilliant architecture.
Who exactly was talking about your software? Classic deflection.

Plus, most of all, it would not have accepted a block with a transaction that spends the same input twice, nor crash upon it. Which is what all this thread is about.
Lucky.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: piotr_n on September 24, 2018, 02:38:21 PM
No, sir. Your jealous comments are amateurish at best.

My software might not have been tested as much as satoshi's code base, but it's proven to work very well, has excellent performance, and is very easy to work with because of its brilliant architecture.
Plus, most of all, it would not have accepted a block with a transaction that spends the same input twice, nor crashed on one. Which is what this whole thread is about.
Who exactly was talking about your software? Classic deflection.
You were:
Quote
any attempt at a secondary implementation so far has been amateurish at best.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: Lauda on September 24, 2018, 02:42:28 PM
No, sir. Your jealous comments are amateurish at best.

My software might not have been tested as much as satoshi's code base, but it's proven to work very well, has excellent performance, and is very easy to work with because of its brilliant architecture.
Plus, most of all, it would not have accepted a block with a transaction that spends the same input twice, nor crashed on one. Which is what this whole thread is about.
Who exactly was talking about your software? Classic deflection.
You were:
Quote
any attempt at a secondary implementation so far has been amateurish at best.
No; stop using this thread as a means to promote an implementation that has 0 active reviewers (and probably 0 users; excl. the creator).


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: piotr_n on September 24, 2018, 02:49:18 PM
No; stop using this thread as a means to promote the implementation that has 0 active reviewers and probably 0 users.
You stop judging implementations that you haven't even made an effort to see.

Number of reviewers doesn't mean shit.
We've just seen that all it takes is one celebrity saying "it's safe" and none of the dozens of reviewers is even going to question that.
Behavior of the crowd is a bitch. That's why I prefer to work alone.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: Lauda on September 24, 2018, 02:57:51 PM
You stop judging implementations that you haven't even made an effort to see.
I don't need to see it; you've already described it as a one-person (garbage) project.

Number of reviewers doesn't mean shit.
When it comes to 0 reviewers vs. a decent number of reviewers, that's objectively false.

That's why I prefer to work alone.
That's obviously better for everyone. :)


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: spartacusrex on September 24, 2018, 03:20:27 PM
Base Protocol needs to be set in stone.

Very hard to write multiple implementations of a moving target.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: piotr_n on September 24, 2018, 03:30:43 PM
You stop judging implementations that you haven't even made an effort to see.
I don't need to see it; you've already described it as a one-person (garbage) project.
Then you know very little about software development, my friend.
In my life I've made many one-person projects and those who paid me for them were never disappointed.
And unlike you, they weren't idiots.

Number of reviewers doesn't mean shit.
When it comes to 0 reviewers vs. decent number of reviewers, that's objectively false.
Obviously not always, as the event we're talking about clearly proves.

Code created entirely by one person has the advantage that its author understands it better, which generally lowers the chance of a mistake when changing it.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: aliashraf on September 24, 2018, 04:02:03 PM
Obviously, it would be much safer for a community to take care of one implementation with fewer lines of code.
I don't think it is necessarily best to rely on the "community" to ensure that each implementation of a bitcoin node is secure/safe to use.

Members of the community might have, at most, a few million dollars' worth of their own bitcoin at stake, and even if they make a mistake, they are unlikely to personally lose any money. On the other hand, there are several bitcoin-related businesses holding billions of dollars' worth of customer money, with hundreds of millions (and in some cases billions) of dollars of equity, who have serious incentives to ensure these types of bugs don't make it into production software, and to have fail-safes in place to prevent any actual losses if and when such bugs slip through the cracks.

I would point out that I am not aware of any major exchange "pausing" deposits and/or withdrawals immediately after this bug was discovered; however, anyone running the affected software would have needed some time to stop deposits/withdrawals while upgrading their nodes (which would include reviewing the code). This leads me to believe that the majority of exchanges/businesses are running their own custom node software; maybe not exclusively, but it is at least part of what they run.


I thought we were done with the multiple-implementations proposal, which is why I brought up my software-bloat diagnosis of the issue. Now I see we are recycling the same idea again and again, and it is really disappointing and makes me paranoid.

Why? Why keep pushing a false idea like this? How could 10 million lines of code possibly be safer than 100K?

I suspect most of the people who are insisting on this idea, multiple implementations, are just taking advantage of an incident to retaliate against the Core team. I'm sure you are not such a person, but I'm afraid you don't completely get how irrelevant and dangerous this proposal is.

Satoshi's original software was something like 3K lines of code; now we have bitcoin core at 100K lines, and every software engineer knows what that means and how inevitable bugs become. Actually, it is very impressive that an unexploited bug is the worst incident of its kind after nearly 10 years, and the Core guys deserve a lot of kudos for the job they have done so far.

Now, instead of a decent technical discussion about how and why this bloat happened and what measures should be taken to manage the risks involved, we are watching biased actors, who either seek more power in the community or suppose a weakened developer community is better for them, suggesting the oddest solution ever: using even more lines of code! I made a prophecy regarding this situation:
I understand there are many people with various incentives, mostly not in the best interests of bitcoin, who may find such an event a good opportunity to take advantage in an irresponsible and unproductive way. Although I have a reputation on this forum for being somewhat discontented with Core devs, I hope my statements here, besides being helpful in the context of this topic, show how committed I am to bitcoin as a liberating movement, instead of blindly opposing and fighting specific parties.

The most sophisticated reasoning behind the multiple-implementations idea goes something like this:
One thing I've always wanted to do -- but have never had the energy for -- was to run multiple implementations of bitcoin (e.g. btcd and bitcoin core) and only transact while they are in agreeance.

For most bitcoin businesses, a few hours of not processing deposits/withdrawals is actually not a big deal, and happens pretty regularly anyway (through no fault of bitcoin itself). While on the other hand transacting after an accidental chain-split could really be devastating.
(which you, @quickseller, have merited by the way)
But it is not a mature idea, as I have argued before:
Firstly, it implies a new kind of consensus system, one with no theoretical support.
Bitcoin's consensus system is based on game theory (the rational behavior of players with divergent incentives); it is NOT about fault tolerance with regard to software bugs. Putting such a layer on top of bitcoin is not a trivial job, because we are not talking about low-level control systems and the like.

From a software engineering point of view, once we are concerned about bugs in a system, software bloat is the most important problem to take care of. As I discussed in my previous post in this topic (https://bitcointalk.org/index.php?topic=5035144.msg46091598#msg46091598), for bitcoin this bloat is not the mistake of an architect or a programmer; it is a direct consequence of the fact that bitcoin is a new class of system and is not (and should not be) governed traditionally. That escalates a case-by-case development process and an institutional conservatism in the community, according to which devs define their role as guardians.

Bitcoin has grown from 3,000 lines of code to 100,000, all the way through soft forks, i.e. maintaining backward compatibility (somehow). That is the source of the bloat
and of the critical vulnerability to bugs, and this situation should be reconsidered radically if there is any real motivation to do something other than take foolish, temporary political advantage of a serious threat that should be addressed asap, before everything is lost.



Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: danda on September 24, 2018, 04:10:21 PM
As I see it:

  • The consensus layer protocol needs to be formally specified and versioned, bugs and all. The spec should be updated before the consensus code is.
  • Consensus layer code should be changed as rarely as possible, if ever.

I would be very happy to see a software fork that promises its users that consensus layer code will not be changed except for critical bug fixes. I think this would have a couple of important effects:

1. Foster growth of the bitcoin sub-community whose priority is to resist changes to the protocol.  Those who value stability, predictability, immutability in an asset/currency over shiny new things.
2. Create a technical means for this community to have its voice heard.  Possibly blocking future soft forks from occurring, or at least keeping the "old ways" alive.

my still-running pre-segwit node is not affected by this bug. just sayin.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: DooMAD on September 24, 2018, 06:08:42 PM
I suspect most of the people who are insisting on this idea, multiple implementations, are just taking advantage of an incident to retaliate against the Core team. I'm sure you are not such a person, but I'm afraid you don't completely get how irrelevant and dangerous this proposal is.

Satoshi's original software was something like 3K lines of code; now we have bitcoin core at 100K lines, and every software engineer knows what that means and how inevitable bugs become. Actually, it is very impressive that an unexploited bug is the worst incident of its kind after nearly 10 years, and the Core guys deserve a lot of kudos for the job they have done so far.

Now, instead of a decent technical discussion about how and why this bloat happened and what measures should be taken to manage the risks involved, we are watching biased actors <snip>

Putting aside the absurdity of someone who argues against any and all off-chain solutions, and who believes every future development can happen nowhere other than on layer 0, having the gall to use the word "bloat" in a sentence without a hint of irony...

How about, instead of gross generalisations and dismissing thousands of lines of code as "bloat", you name the specific parts of the code that you believe aren't needed?  This will greatly expedite the moment someone can tell you why you're wrong and we can all move on.

Fair enough if we're ruling out alternative implementations due to security concerns.  It was only a suggestion based on the conventional wisdom of "not putting all your eggs in one basket".  But, considering that you are someone who has launched multiple lengthy tirades against the current direction this project is moving in as of late, if you're using this as another opportunity to take cheap shots at LN, you can add yourself to the list of biased actors who are taking advantage of this incident.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: aliashraf on September 24, 2018, 09:13:49 PM
I suspect most of the people who are insisting on this idea, multiple implementations, are just taking advantage of an incident to retaliate against the Core team. I'm sure you are not such a person, but I'm afraid you don't completely get how irrelevant and dangerous this proposal is.

Satoshi's original software was something like 3K lines of code; now we have bitcoin core at 100K lines, and every software engineer knows what that means and how inevitable bugs become. Actually, it is very impressive that an unexploited bug is the worst incident of its kind after nearly 10 years, and the Core guys deserve a lot of kudos for the job they have done so far.

Now, instead of a decent technical discussion about how and why this bloat happened and what measures should be taken to manage the risks involved, we are watching biased actors <snip>

Putting aside the absurdity of someone who argues against any and all off-chain solutions, and who believes every future development can happen nowhere other than on layer 0, having the gall to use the word "bloat" in a sentence without a hint of irony...
More ironic would be the attitude of an "anti-FUD" troll who suddenly suggests using alternative implementations because of his common sense about "not putting all the eggs in one basket", I suppose.  

Quote
How about, instead of gross generalisations and dismissing thousands of lines of code as "bloat", you name the specific parts of the code that you believe aren't needed?  This will greatly expedite the moment someone can tell you why you're wrong and we can all move on.
Have you ever tried coding? That is not how it works. You cannot take the Windows source code and point a finger at this or that module to blame. "Bloat" is a technical term, and one needs basic software engineering knowledge and experience to follow it.

Based on your above argument, I suppose you are not 100% qualified to judge my assessment. It would be Greg Maxwell's job to denounce the bitcoin code as being in a bloated state, and then my job to convince him of it. For now, 7500% growth in code volume is enough evidence to "move on" with my analysis.

FYI: My opposition to LN and off-chain scaling solutions has nothing to do with software bloat. I think it is absolutely possible to have smart, clean, and elegant software that implements a robust and solid protocol capable of satisfying all the necessary conditions for a decentralized, permissionless, and secure "p2p electronic cash system", and that is capable of performing and scaling well enough to be gradually adopted as an alternative monetary system, without compromising any of these features or delegating any part of its job to non-standard, semi-centralized second-layer solutions.
That is how I show my loyalty to "the cause". Unlike people like you, I haven't given up on bitcoin and crypto.

Quote
Fair enough if we're ruling out alternative implementations due to security concerns.  It was only a suggestion based on the conventional wisdom of "not putting all your eggs in one basket".  

Ok, I get it. You said something silly and now you are sorry.
Apology accepted, but you should also stop attacking my character.
Just keep reading and try to learn, instead of talking nonsense about LN in a decent, highly technical topic about a critical turning point in bitcoin history.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: Lauda on September 24, 2018, 09:54:12 PM
It was only a suggestion based on the conventional wisdom of "not putting all your eggs in one basket".  
Maybe you should avoid trying to apply "investment wisdom" on software engineering, especially security-related engineering. Just a thought.

Have you ever tried coding? That is not how it works. You cannot take the Windows source code and point a finger at this or that module to blame. "Bloat" is a technical term, and one needs basic software engineering knowledge and experience to follow it.
-snip-
For now, 7500% growth in code volume is enough evidence to "move on" with my analysis.
This says everything. I sense a strong IT background. :)


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: Peter R on September 24, 2018, 11:01:15 PM
As I see it:

  • The consensus layer protocol needs to be formally specified and versioned, bugs and all. The spec should be updated before the consensus code is.
  • Consensus layer code should be changed as rarely as possible, if ever.



A formal spec would be nice, I agree.  But I think it is interesting that Core's inflation bug wasn't due to nuance in the consensus rules.  Having a formal spec wouldn't have helped in this case.  The bug literally allowed coins to be created out of thin air.  Something that _obviously_ was not supposed to happen.  You might call it the most important consensus rule in Bitcoin!

As to your second point about the consensus layer not changing much, I think this is tricky in practice too. For example, a hot area of research in blockchain scaling is how to parallelize block validation.  This type of work will almost certainly involve refactoring consensus-critical code.  If we eventually want to process thousands, or hundreds of thousands, of transactions per second, we will need massive parallelization.
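To illustrate the shape of such a refactor, here is a purely hypothetical sketch; `check_input` is a stand-in for real script verification, which is far more involved:

```python
# Sketch of parallelized block validation: per-input script checks are
# independent of each other, so they can fan out to a worker pool.
# check_input is a placeholder assumption, not real consensus code.
from concurrent.futures import ThreadPoolExecutor

def check_input(inp):
    # Real code would execute the input's scriptSig against the
    # spent output's scriptPubKey; here we just read a flag.
    return inp.get("valid", False)

def validate_inputs_parallel(inputs, workers=4):
    # The block is valid only if every input check passes.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return all(pool.map(check_input, inputs))

print(validate_inputs_parallel([{"valid": True}] * 3))                # True
print(validate_inputs_parallel([{"valid": True}, {"valid": False}]))  # False
```

The hard part, of course, is that checks like duplicate-input detection and UTXO set updates involve shared state, so not every consensus rule parallelizes this cleanly.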


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: DooMAD on September 24, 2018, 11:10:06 PM
Fair enough if we're ruling out alternative implementations due to security concerns.  It was only a suggestion based on the conventional wisdom of "not putting all your eggs in one basket".  

Ok, I get it. You said something silly and now you are sorry.
Apology accepted, but you should also stop attacking my character.
Just keep reading and try to learn, instead of talking nonsense about LN in a decent, highly technical topic about a critical turning point in bitcoin history.

Poor analysis as usual.  There was an exchange of ideas, and my idea has now been established to be objectionable.  So instead of plowing on regardless of how many people tell me it's a bad idea, I'm accepting that decision and not pursuing it further.  That's something you could learn, but I suspect you won't.

Also, I sincerely doubt this is a "turning point" on the scale you have in mind.  There will be some proposals to introduce a little more vigilance, but it sounds like you're expecting some sort of total rebuild from the ground up.  Your track record of contributing to technical topics usually consists of telling people that everything they're working on needs to go in the trash because you supposedly know best.


1. Foster growth of the bitcoin sub-community whose priority is to resist changes to the protocol.  Those who value stability, predictability, immutability in an asset/currency over shiny new things.
2. Create a technical means for this community to have its voice heard.  Possibly blocking future soft forks from occurring, or at least keeping the "old ways" alive.

Hang on, what?  You want to have a veto over what many see as a permissionless system?  No thanks.  Also, you already have a technical means to do that.  If you want to block future forks to preserve the "old ways", you're going to have to start with a fork of your own.  Keep it frozen in time forever if you like.  It stands to reason that you won't have many developers on hand when you do need to change something, though.





Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: harding on September 24, 2018, 11:16:09 PM
I'm disappointed that almost everyone posting in this thread is making suggestions about what other people should do rather than thinking of ways to contribute themselves.

I plan to open at least one PR a month adding a useful new test to Bitcoin Core, or significantly improving an existing test, with the first PR to be opened by 31 October 2018.

What do the rest of you plan to do?


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: Peter R on September 24, 2018, 11:26:52 PM
I'm disappointed that almost everyone posting in this thread is making suggestions about what other people should do rather than thinking of ways to contribute themselves.

I plan to open at least one PR a month adding a useful new test to Bitcoin Core, or significantly improving an existing test, with the first PR to be opened by 31 October 2018.

What do the rest of you plan to do?

I plan to continue working on a competing implementation to Bitcoin Core. It was because of Bitcoin Unlimited that this bug was caught, when Awemany noticed it while working on the consensus changes for the November fork in BCH.  This tells me that multiple implementations and competing development teams is a good thing.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: JayJuanGee on September 24, 2018, 11:32:49 PM
I'm disappointed that almost everyone posting in this thread is making suggestions about what other people should do rather than thinking of ways to contribute themselves.

I plan to open at least one PR a month adding a useful new test to Bitcoin Core, or significantly improving an existing test, with the first PR to be opened by 31 October 2018.

What do the rest of you plan to do?

A newbie needs to have a track record before dictating what others could or should do, no?  Let's see if you do what you said you were going to do.  How many months will you report back to show that you have taken actual action and are more than just promises?


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: Peter R on September 24, 2018, 11:51:50 PM
I'm disappointed that almost everyone posting in this thread is making suggestions about what other people should do rather than thinking of ways to contribute themselves.

I plan to open at least one PR a month adding a useful new test to Bitcoin Core, or significantly improving an existing test, with the first PR to be opened by 31 October 2018.

What do the rest of you plan to do?

newbie needs to have a track record before dictating what others could or should do, no?  Let's see if you do what you said you were going to do.  How many months are you going to report back to show that you have taken actual action and are more than just promises?


I'm pretty sure @harding is David Harding.  Definitely not a Bitcoin newbie.

https://github.com/harding


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: bones261 on September 25, 2018, 01:17:36 AM
I plan to continue working on a competing implementation to Bitcoin Core. It was because of Bitcoin Unlimited that this bug was caught, when Awemany noticed it while working on the consensus changes for the November fork in BCH.  This tells me that multiple implementations and competing development teams is a good thing.

I'm just disappointed that awemany hasn't received more tips. I personally tipped him 0.01 BCH. He hadn't even gathered 39 BCH the last time I checked. I would think the BTC and BCH communities would be more grateful and giving. (As well as the LTC, BTG, etc. communities.)



Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: Lauda on September 25, 2018, 05:51:29 AM
I plan to continue working on a competing implementation to Bitcoin Core. It was because of Bitcoin Unlimited that this bug was caught, when Awemany noticed it while working on the consensus changes for the November fork in BCH.  This tells me that multiple implementations and competing development teams is a good thing.
I'm just disappointed that awemany hasn't received more tips. I personally tipped him .01 BCH. He hasn't even gathered 39 BCH, yet, last time that I checked. I would think the BTC and BCH community would be more grateful and giving. (As well as LTC, BTG etc. etc. communities.)
Given the absolutely shameful disaster of a post (https://medium.com/@awemany/600-microseconds-b70f87b0b2a6) that he wrote on medium, he deserves nothing IMO.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: Ayms on September 25, 2018, 09:42:30 AM
I plan to continue working on a competing implementation to Bitcoin Core. It was because of Bitcoin Unlimited that this bug was caught, when Awemany noticed it while working on the consensus changes for the November fork in BCH.  This tells me that multiple implementations and competing development teams is a good thing.
I'm just disappointed that awemany hasn't received more tips. I personally tipped him .01 BCH. He hasn't even gathered 39 BCH, yet, last time that I checked. I would think the BTC and BCH community would be more grateful and giving. (As well as LTC, BTG etc. etc. communities.)
Given the absolutely shameful disaster of a post (https://medium.com/@awemany/600-microseconds-b70f87b0b2a6) that he wrote on medium, he deserves nothing IMO.

My opinion about this whole story is that, unfortunately, it gives a strong impression of discouraging people from reporting bugs. @awemany may be thinking that he would have done better to sell the exploit to some dubious party that could have used it.

I don't think it's very important to know the whole truth. He reported a critical bug and deserved a much more substantial reward (though he could admit that "beardnboobies", while funny, is also somewhat arrogant). Maybe he didn't know how critical it was, but his action prevented others from discovering and using it.

Which of the solutions discussed here is best for the future? I don't know.

But it's always the same story: decentralized systems have all failed (except BitTorrent) for lack of incentives for people to participate (run nodes, contribute code, review, etc.). Fortunately, bitcoin is not a decentralized system today, so such a situation can be controlled, but it should and will be one in the future; what will the incentive for people be then (Lightning?)?

A simple example: I realized after the vulnerability disclosure that I was running a deviant node, version 0.17.99, since I had installed it from master (with a lot of difficulty). I then had to patch the vulnerability manually; I should revert to 0.16.3 and have a clean node. Now, why would I spend more time on this?








Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: dragonvslinux on September 25, 2018, 10:45:27 AM
Well written; it's good to see an honest and constructive approach to the problem. I fully agree that an LTS branch should now be implemented, starting with 0.16.3; this has been long overdue since the overflow bug of 2010. While it'd be nice for bitcoin "companies" (companies profiting from Bitcoin transactions) to contribute to Core testing, it seems like they may only do so if they have to. Maybe now they will consider it? But in my opinion, they would more likely want to throw money at the problem rather than get their hands dirty.
All in all, it was a good catch, the patch was rolled out very effectively, and damage was limited to $0. It's good to keep sight of this: in one sense, this is a definite victory.
To me this is simply a learning experience. We should never trust any technology 100%; there will always be some chance of an exploit. The question is whether it can be identified through rigorous testing, or whether a malicious actor will discover it first and exploit it. This is the logic we need to work with, remembering that no code is perfect.
What concerns me now is knowing that there may be more malicious actors studying the code for future exploits, as opposed to Core testers.
 


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: Lauda on September 25, 2018, 10:51:44 AM
While it'd be nice for bitcoin "companies" (companies profiting from Bitcoin transactions) to contribute to core testing, it seems like they may only do so if they have to. Maybe now they will consider it? But in my opinion it'd be more likely they would want to throw money at the problem, rather than get their hands dirty.
It's utterly disgusting how greedy some of the leading companies in this space are, and just how fraudulently their leaders tend to represent themselves with statements of support for, or belief in, Bitcoin or similar. AFAIK, blockchain.info has hired 1 employee (although I can't recall who), and there was a Xapo listing for a Bitcoin Core developer some time ago (I'm not sure what happened with that). Other than those two, I haven't seen other examples of companies doing this.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: Samarkand on September 25, 2018, 11:23:32 AM
...
Other than those two, I haven't seen other examples of companies doing this.

Blockstream also has hired quite a few developers who contribute to Bitcoin Core and don't work on their sidechain products. Allegedly they also receive time-locked bitcoins to ensure that it is in their interest to maintain Bitcoin (a pretty good idea if you ask me; more companies should do this).

...
6- Because of the last two factors, bitcoin now suffers from a software engineering problem: software bloat. Through the years, incremental development, plus the effort of keeping the system backward compatible and insisting on soft forks over hard forks, has led to a situation in which the bitcoin code is becoming more and more complicated and hard to understand, contribute to, and maintain.
...

Isn't this true for any larger software engineering project?


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: chek2fire on September 25, 2018, 11:24:09 AM
I plan to continue working on a competing implementation to Bitcoin Core. It was because of Bitcoin Unlimited that this bug was caught, when Awemany noticed it while working on the consensus changes for the November fork in BCH.  This tells me that multiple implementations and competing development teams is a good thing.

I'm just disappointed that awemany hasn't received more tips. I personally tipped him .01 BCH. He hasn't even gathered 39 BCH, yet, last time that I checked. I would think the BTC and BCH community would be more grateful and giving. (As well as LTC, BTG etc. etc. communities.)



This guy deserves nothing because he used it as a political propaganda tool against bitcoin. And it's not even certain that he is the same guy who found the bug, as his claims have some timestamp inconsistencies.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: harding on September 25, 2018, 11:28:28 AM
AFAIK, blockchain.info has hired 1 employee (although I can't recall who) and there was a Xapo listing for a Bitcoin Core developer some time ago (I'm not sure what happened with that). Other than those two, I haven't seen other examples of companies doing this.

Last I heard, Sjors Provoost works for Blockchain.info, Anthony (AJ) Towns works for Xapo, and Jim Posen works for Coinbase. Blockstream employs Pieter Wuille, Jorge Timon, Gregory Sanders, and several other contributors (plus two C-Lightning devs).  Several companies also help support the Media Lab's Digital Currency Initiative (DCI), which employs Wladimir van der Laan and Cory Fields (as well as several other open source Bitcoin contributors who don't normally focus on Bitcoin Core).

The other major source of employment for Bitcoin Core work is ChainCode Labs.

(I could be forgetting some companies; if so, sorry.)


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: chek2fire on September 25, 2018, 11:29:31 AM
Well written, it's good to see a honest and constructive approach to the problem. I fully agree an LTS branch should now be implemented, starting with 16.03, this is long overdue since the overflow bug of 2010. While it'd be nice for bitcoin "companies" (companies profiting from Bitcoin transactions) to contribute to core testing, it seems like they may only do so if they have to. Maybe now they will consider it? But in my opinion it'd be more likely they would want to throw money at the problem, rather than get their hands dirty.
All in all it was a good catch, the patch was rolled out very effectively and damage was limited to $0. It's good to keep sight of this, in one sense, this is a definite victory.
To me this is merely a learning curve, we never should trust any technology 100%, there will always >1% chance of an exploit, the question is whether it can be identified through rigorous testing, or whether a malicious actor will discover it first and therefore exploit it. This is the logic we need to work on, remembering that no code is perfect.
What concerns me now is knowing that there may now be more malicious actors studying the code for any future exploits, as opposed to Core testing.
 


In the open-source world, we have always seen that startups and companies that use an open-source program have a full-time paid developer contributing to the code. A great example of this is Linux.
Bitcoin is an open-source paradox, IMO, because most of the companies that derive great profits from it have never felt the need to contribute to the code; an example of this is Coinbase.
Many of them also want to replace it with something they fully control, acting like someone who wants to destroy it.
Great examples of this are Bitmain, Bitpay, and Blockchain.info.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: aliashraf on September 25, 2018, 12:24:04 PM
I don't want to disturb you guys, but this train looks pretty much derailed. I know it is always a good idea to ask companies to contribute more, but is this all we've got? Isn't it somewhat of an underreaction to the duplicate input bug story?

Also, I think it is not helpful to say "don't tell others what to do, tell us what you are going to do for your country blockchain". We need to discuss the technical aspects of the issue.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: buzztiaan on September 25, 2018, 05:46:30 PM
Maybe it's time to promote Bitcoin Knots and other node software that is not Core? It seems to me that having multiple implementations that are not forks of one another would provide resilience against bugs.

* http://bcoin.io/
* https://libbitcoin.dyne.org/
* https://github.com/btcsuite/btcd/
* https://bitcore.io/

i'm sure you can find more on your own


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: Lauda on September 25, 2018, 06:42:28 PM
AFAIK, blockchain.info has hired 1 employee (although I can't recall who) and there was a Xapo listing for a Bitcoin Core developer some time ago (I'm not sure what happened with that). Other than those two, I haven't seen other examples of companies doing this.

Last I heard, Sjors Provoost works for Blockchain.info, Anthony (AJ) Towns works for Xapo, and Jim Posen works for Coinbase. Blockstream employs Pieter Wuille, Jorge Timon, Gregory Sanders, and several other contributors (plus two C-Lightning devs).  Several companies also help support the Media Lab's Digital Currency Initiative (DCI), which employs Wladimir van der Laan and Cory Fields (as well as several other open source Bitcoin contributors who don't normally focus on Bitcoin Core).

The other major source of employment for Bitcoin Core work is ChainCode Labs.

(I could be forgetting some companies; if so, sorry.)
I thought it was Sjors, but I didn't want to spread potentially false information as I wasn't completely sure. Well, yes, Blockstream is implied for everyone who's been around for a while (hence why I didn't mention it). I guess it wouldn't be bad to keep a list like this somewhere; maybe the community could create some pressure to get other big companies to at least hire one person to work on the reference implementation. This also helps with *development decentralization* (me recalls the bcash nonsense 'Blockstream = most commits' or w/e).

* https://bitcore.io/
"Bitcore™ © BitPay, Inc. Bitcore is released under the MIT license." Stabbing my eyeball with a fork would be more pleasant than using an implementation made by a malicious actor. ::)


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: pooya87 on September 26, 2018, 03:28:44 AM
* https://libbitcoin.dyne.org/
The website doesn't contain any link to the source code apart from a broken link called "git repository" which leads nowhere. Using Google, I found https://github.com/libbitcoin/libbitcoin, which seems to be a library rather than an application (a full node implementation), and it is in C++, so I wouldn't be surprised if most of it were a copy of Bitcoin Core code.

Quote
* https://bitcore.io/
The last commit is from Nov 23, 2017! It doesn't seem that active either.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: byteball on September 26, 2018, 04:23:08 PM
...
Firstly, it implies a new kind of consensus system, with no theoretical support. I'm not aware of any serious work covering a decentralized consensus-based system that uses heterogeneity of implementations as a main or auxiliary security measure, e.g. for immunity against software bugs.

What if there were special research labs, each running a few implementations, e.g. in Go, Java, Python, and pure C (is there one?), with a human deciding which chain is correct? When the nodes disagree with each other, they send an alert to an operator on duty; the operator wakes up, fires up his/her laptop, investigates, and warns the community about a potential split.

I have no idea how such a service could be monetized or rewarded. They could gain fame and use it for their own projects, or they could get donations from whales or from big Bitcoin businesses.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: amishmanish on September 26, 2018, 05:45:03 PM
Awesome discussion. My head is swirling from all the information so I'll do my part to ensure that, as the topic suggests, the duplicate input vulnerability is never forgot!! 8)

The Seventeenth of September

Remember, remember!
The Seventeenth of September,
The DoS vul'bility and Inflation fault;
I know of no Reason
Why the 0.15 to 0.16.2 season
Should ever be forgot!
 
 ::) ::)

Inspired by the inimitable Folk verse (http://www.potw.org/archive/potw405.html). Thank You. Now I shall go back to wondering when, if ever, I'll be able wrap my head around bitcoin!  :D

A seemingly presumptuous, non-technical observation: Cobra should realize that Greg seems to have enough respect for him, and he shouldn't construe his zen'd-out insights as insulting. Nobody is insulting anyone. Let's just stand together. Cheers to everyone!


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: hobbes on September 27, 2018, 06:42:19 PM
Separate the networking part from wallet and GUI to reduce complexity.

Maybe the alert system could be modified to only warn the user with a predefined warning to go check the news because something is going on.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: cellard on September 27, 2018, 08:04:25 PM
Separate the networking part from wallet and GUI to reduce complexity.

Maybe the alert system could be modified to only warn the user with a predefined warning to go check the news because something is going on.

Interesting... a hardcoded generic message that says "go check the news" could be helpful. However, who holds the keys? It should require the signatures of several trusted developers to guarantee they aren't all compromised; at least 10 signatures for safety, IMO, from developers who live in different timezones.
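The k-of-n rule suggested above can be sketched as follows. This is purely illustrative: HMAC stands in for the real digital signatures (and key distribution) such a system would need, and all names are made up:

```python
import hashlib
import hmac

def sign(secret, message):
    # Stand-in for a real signature scheme (e.g. ECDSA).
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

def alert_valid(message, signatures, dev_keys, threshold=10):
    """Accept an alert only if at least `threshold` *distinct*
    developers produced a valid signature over the message."""
    valid_signers = {
        name for name, sig in signatures
        if name in dev_keys
        and hmac.compare_digest(sig, sign(dev_keys[name], message))
    }
    return len(valid_signers) >= threshold
```

Counting distinct signers (a set, not a list) matters: otherwise a single compromised key could meet the threshold by signing the same message repeatedly.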

And still an attacker with enough resources could buy enough media to fool the public and use the generic message for their agenda. See this in action:

https://www.youtube.com/watch?v=_fHfgU8oMSo

It's not clear to me whether the alert system does more harm than good. Ideally we just want to avoid these bugs. Lowering complexity is always welcome in Bitcoin... it just needs to store keys safely and not screw up during transactions; the rest is extra. Of course, easier said than done, but as far as I know some of the "super minimalist" clients weren't affected by this bug, so the "bitcoin minimalists" scored another point.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: DooMAD on September 27, 2018, 08:54:44 PM
Maybe the alert system could be modified to only warn the user with a predefined warning to go check the news because something is going on.

The alert system wasn't retired only due to concerns over who could send what message, but also because of a potential vulnerability involving DoS attacks on full nodes:

All of the issues described below allow an attacker in possession of the Alert Key to perform a Denial of Service attack on nodes that still support the Alert system. These issues involve the exhaustion of memory which causes node software to crash or be killed due to excessive memory usage.

I don't think they're in any hurry to bring it back in a slightly different guise.  There would be a certain irony if we inadvertently introduced new security risks while attempting to safeguard against potential future security risks.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: theymos on September 27, 2018, 09:50:42 PM
Maybe the alert system could be modified to only warn the user with a predefined warning to go check the news because something is going on.

I suggested something like that previously (https://en.bitcoin.it/wiki/User:Theymos/Alert_codes).

I do think that some alert system would be good, though the old alert system's propagation method and especially its single-key authentication was really bad, so I don't mourn its loss specifically. A new system could work by polling DNS TXT records + signatures (eg. alert.bitcoincore.org.    TXT    "predefined_alert=2 time=... sig=ABCD+/012..."), with many domains+keys controlled by many people and perhaps a requirement that at least a few of them agree before displaying an alert.
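The agreement logic for such a scheme could look like the following sketch. The TXT field names are taken from the example above; the quorum rule and everything else is hypothetical, and signature verification and the actual DNS queries are omitted:

```python
def parse_alert_txt(record):
    """Parse a TXT record such as
    'predefined_alert=2 time=1538000000 sig=ABCD...'
    into a dict of fields."""
    return dict(field.split("=", 1) for field in record.split())

def quorum_alert(records, min_agree=3):
    """Given {domain: txt_record} fetched from independently
    controlled domains, return the alert code only if at least
    `min_agree` of them publish the same predefined alert."""
    counts = {}
    for domain, record in records.items():
        code = parse_alert_txt(record).get("predefined_alert")
        if code is not None:
            counts[code] = counts.get(code, 0) + 1
    for code, count in counts.items():
        if count >= min_agree:
            return code
    return None
```

Requiring agreement across domains controlled by different people is what removes the old system's single point of failure: no one key (or one compromised DNS zone) can raise an alert by itself.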


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: aliashraf on September 28, 2018, 06:37:08 AM
Isn't it more about prevention than cure?

And as for the cure, I don't see how an alert system would help. Actually, it seems to me a source of even further risk. I maintain my argument from up-thread that software bloat is the most significant source of bugs. I understand it is the hard way and needs a lot of effort, but once you are concerned about bugs, the best practice is to take care of code volume.



Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: Lauda on September 28, 2018, 06:41:58 AM
Isn't it more about prevention rather than cure?

And for the cure, I don't get it how an alarming system would help? Actually it seems to me as a source of even further risks. I maintain my arguments up-thread regarding software bloat as the most distinguished source of bugs. I understand it is the hard way and needs a lot of efforts but once you are concerned about bugs, best practice is to take care of code volume.
A complete separation of node and wallet code (i.e. the possibility of just building and running the node base) would help IMO. It does come with drawbacks though.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: hobbes on September 28, 2018, 06:49:02 AM
(...)
I maintain my arguments up-thread regarding software bloat as the most distinguished source of bugs. I understand it is the hard way and needs a lot of efforts but once you are concerned about bugs, best practice is to take care of code volume.
A complete separation of node and wallet code (i.e. the possibility of just building and running the node base) would help IMO. It does come with drawbacks though.
At least for alternative implementations used for monitoring, it is the way to go.

Could you elaborate on the drawbacks?


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: Lauda on September 28, 2018, 06:53:23 AM
(...)
I maintain my arguments up-thread regarding software bloat as the most distinguished source of bugs. I understand it is the hard way and needs a lot of efforts but once you are concerned about bugs, best practice is to take care of code volume.
A complete separation of node and wallet code (i.e. the possibility of just building and running the node base) would help IMO. It does come with drawbacks though.
At least for alternative implementations for monitoring it is the way to go.

Could you elaborate on the drawbacks?
ryanofsky is working on process separation, as seen here: https://github.com/bitcoin/bitcoin/pull/10973. Here are his slides on process separation: https://docs.google.com/presentation/d/1AeJ-7gD-dItUgs5yH-HoEzLvXaEWe_2ZiGUUxYIXcws/edit#slide=id.p. Look at page 6.
Reviewing the node code would be easier if it were completely separated from the GUI and wallet code (there would be noticeably fewer lines of code to go through). I'm not sure if the end goal is complete separation, though.


Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: aliashraf on September 28, 2018, 07:57:35 AM
Isn't it more about prevention rather than cure?

And for the cure, I don't get it how an alarming system would help? Actually it seems to me as a source of even further risks. I maintain my arguments up-thread regarding software bloat as the most distinguished source of bugs. I understand it is the hard way and needs a lot of efforts but once you are concerned about bugs, best practice is to take care of code volume.
A complete separation of node and wallet code (i.e. the possibility of just building and running the node base) would help IMO. It does come with drawbacks though.
A good point to start from, though more radical changes would be necessary.

I'm thinking of a complete rewrite: employing loose coupling, revising the bootstrap-from-genesis policy, and relaxing the back-to-the-big-bang compatibility requirements. Thanks for the links, by the way.



Title: Re: The duplicate input vulnerability shouldn't be forgotten
Post by: zheniasom on September 29, 2018, 07:02:10 PM
I agree with DooMAD completely. Diversity is the only real solution to network security.

We have to accept the fact that bugs are found in absolutely ALL software.
The only thing we can hope is that: A) it is rare for bugs to be found in all software at the same time, and B) no single entity is likely to have all of these bugs at that time. ;)