You must be talking about altcoin mining since there is virtually no choice when it comes to mining software with modern asic hardware.
|
|
|
Some of the nodes have come back online, presumably patched, but it's still under 400 on coin dance. I wonder how much time is needed to know how many have patched their clients and restarted them or have abandoned BU after this. It will probably be many days before we know since not all nodes will be attended to or monitored daily.
Looks like the number of nodes is back to the baseline of ~800. Neither the second showstopper bug that was exploited, nor their embarrassingly bad attempt to cover it up by alleging core nodes were affected and faking a screenshot to make the number of core nodes in existence look smaller than it really is, has perturbed them. https://www.reddit.com/r/Bitcoin/comments/5zhmwn/andrew_stones_bu_dev_fake_screenshot_is_a_poor/ The fact that the number of BU nodes is unchanged in light of the above speaks volumes for the type of people running them...
|
|
|
This is one of the most fucked up public screw-ups yet.
|
|
|
Looks like about 200 nodes have stayed online, which are probably mostly patched clients, though some could be very persistent users repeatedly restarting their node (and, less likely, sybil attacks).
|
|
|
Well the pool hasn't solved any blocks of late, so just for some interest here's poolbench, which is showing block speed changes to a client in the USA (slow to load): https://poolbench.antminer.link/ Note the solo pool being the fastest of the fully verifying node pools, and that de is actually faster on change than main solo since the server upgrade. All the ones above it mine empty unverified blocks. Now all we need is a few blocks sprinkled around to make the most of that rapid block propagation/change...
|
|
|
For core version 0.12... why?
Maybe because Bitcoin Core 0.12 code is known by a very large number of people, so it can have a very good peer review. That's one good reason for using bitcoin core 0.12 code as a base for development ideas. The other is the fact that segwit has not been activated and may never be. So lets say someone spots a good opportunity for a code optimisation that does not affect the existing protocol of nodes. It can peer reviewed by more people. It can be integrated into other nodes such as core/classic/xt/bu easier by the developers who understand the differences from that code base. That doesn't make sense for the former reason; the latter (integration with other codebases forked off earlier) does. There are many improvements in 0.13 and 0.14 that are totally unrelated to segwit and if segwit never gets activated the segwit code goes untouched, so forking off 0.12 is just leaving us with ancient code. The problem is everyone's rushing to get on top of each other with new proposals and choosing the shortest cuts possible. Oh well stick around long enough and *someone* will probably create a BIP100 for current core...
|
|
|
For core version 0.12... why? Presumably to predate all segwit code and be compatible with BU? Since it's a hard fork proposal independent of segwit, there is nothing that would prevent this from being made for core 0.14. BIP100 was the most popular choice amongst mining pools when it was first proposed. Don't be surprised if it makes a comeback now. If this was coded up for core instead of BU without any other changes, it would become very popular very quickly...
|
|
|
Indeed Haggs! And as I understand it, it is trivial to increase the blocksize up to a max of 32MB since that's already coded in originally by Satoshi! Don't need all that extra nonsense with all the ulterior motives attached...

32MB was the limit of the message size between nodes. There was in fact no limit at all to block size, but the message size wouldn't cope with anything larger than 32MB. Satoshi was the one who imposed a 1MB limit later on to minimise DoS risk.

I suggest you do some reading in the more general bitcoin/ and development and technical/ discussion sections of the forum to understand why a massive increase all alone like that would be a potentially unmitigated disaster for the network at this time, for multiple reasons. Even an increase to 2MB alone is massively different to 1MB; it doesn't just carry with it 'double the work, storage and bandwidth'. Some things unfortunately do not scale linearly with size at all, due to the original protocol design of bitcoin transaction verification. The original design was not perfect (nothing ever is) and only with the benefit of hindsight can we see the flawed components in the design and the code, which was a monolithic code dump from Satoshi. However it's here to stay and we need to work with it. Assuming we need to scale, everything must be factored into the scaling approach.

By the way, talk of BIP100 is back too, which was by far the most popular pool scaling choice at one stage, though currently they're trying to tie it in with BU: https://bitcointalk.org/index.php?topic=1822824.0
|
|
|
Keep in mind that ironically, almost no miner (besides Bitcoin.com) is actually running BU. They are just signalling it. There are also substantial performance improvements in core 0.13 and 0.14 that haven't made their way into the BU code so miners would lose all those benefits by abandoning the core client.
|
|
|
Isn't this the one where the author says in the BIP
"encoding transactions using XML with super short field names is THE most efficient data encoding possible, I mean, you can't make ANY data smaller than 1 character in text file, am-I-right?!"
I literally pissed myself laughing
Bah, he wasn't ambitious enough; he should have been aiming for 1 bit.
|
|
|
The number of sigspammers who posted absolutely nothing useful rehashing exactly the same pseudopost in this thread gives me a headache.
|
|
|
Has core not done any research on this then?
I'm saying that you and I don't have adequate data, and there is no exact data in this thread. There was an article somewhere about a block that takes longer than 10 minutes to validate at 2MB: https://rusty.ozlabs.org/?p=522
|
|
|
Oh look, only the 7th thread created about this
|
|
|
https://bitco.in/forum/threads/buip033-passed-parallel-validation.1545/

For those unwilling to click through:

BUIP033: Parallel Validation
Proposer: Peter Tschipper
Submitted on: 10/22/2016
Summary:
Essentially Parallel Validation is a simple concept. Rather than validating each block within the main processing thread, we instead create a separate thread to do the block validation. If more than one block arrives to be processed then we create yet another thread. There are currently up to 4 parallel block processing threads available, making a big block DDOS attack impossible. Furthermore, if any attacker were somehow able to jam all 4 processing threads and another block arrived, then the processing for the largest block would be interrupted allowing the smaller block to proceed, unless the larger block or blocks have most proof of work. So only the most proof of work and smallest blocks will be allowed to finish in such a case.
If there are multiple blocks processing at the same time, when one of the blocks wins the race to complete, then the other threads of processing are interrupted and the winner will be able to update the UTXO and advance the chain tip. Although the other blocks that were interrupted will still be stored on disk in the event of a re-org.
...
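The idea in the quoted BUIP can be sketched in a few lines. This is a toy Python simulation (not BU's actual C++ implementation; block names, sigop counts and the per-sigop sleep are illustrative assumptions): each incoming block validates on its own thread, the first to finish wins the race, and the losers are interrupted via a stop flag.

```python
import threading
import time

class ParallelValidator:
    """Toy model of BUIP033-style parallel validation (illustrative only)."""
    MAX_THREADS = 4  # the BUIP describes up to 4 concurrent validation threads

    def __init__(self):
        self.lock = threading.Lock()
        self.winner = None
        self.stop = threading.Event()

    def _validate(self, name, n_sigops):
        # Simulate per-sigop work; check the stop flag each step so a
        # losing thread is interrupted as soon as another block wins.
        for _ in range(n_sigops):
            if self.stop.is_set():
                return
            time.sleep(0.001)
        with self.lock:
            if self.winner is None:
                self.winner = name   # first finisher advances the chain tip
                self.stop.set()      # interrupt the other validation threads

    def race(self, blocks):
        threads = [threading.Thread(target=self._validate, args=(n, s))
                   for n, s in blocks[:self.MAX_THREADS]]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return self.winner

# A cheap-to-validate block beats a sigop-stuffed one even if both
# arrive at the same time, which is the point of the proposal.
winner = ParallelValidator().race([("big", 500), ("small", 5)])
```

Note this is exactly the property argued about later in the thread: without something like this, the first block to arrive always wins regardless of validation cost.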
Thanks I wasn't aware of that. Probably something worth offering in conjunction with BIP102 then.
|
|
|
Of course, if a persistent repeated sequence of such blocks were to be somehow mined back-to-back, that might slow transaction processing to a crawl*.
That is, if no other miner bothered to mine a competing block. Which, of course, is what a rational miner would do in such a situation. For then he would reap the rewards of a more-quickly validating block. (That would be the coinbase reward for solving a block).
The 'excessivity' solves itself. Through natural incentive of rational self-interest.
You keep talking about miners mining this more quickly validating block... there is no code currently that can try to validate two different blocks concurrently and pick the one that validates faster. The first one that comes in will be under validation while any other blocks that come in wait before they can be validated, so unless someone has a rewrite that does what you claim, the problem still exists. The first block that hits will always win.

No disrespect intended. But should excessively-long-to-validate blocks ever become significant, mining using an implementation that does not perform parallel validation is a guaranteed route to bankruptcy. "no code" - you sound pretty sure of yourself there. It may even be the case ... right up until the point in time that it is not.

Right, there is no *public* code that I'm aware of, and I do hack on bitcoind for my own purposes, especially the mining components, so I'm quite familiar with the code. As for "up until the point in time that it is not", well that's the direction *someone* should take with their code if they wish to not pursue other fixes for sigop scaling issues as a matter of priority - if they wish to address the main reason core is against an instant block size increase. Also note that headers-first mining, which most Chinese pools do (AKA SPV/spy mining), and as proposed for BU, has no idea what is in a block and can never choose the one with fewer sigops.
|
|
|
Transactions yes, but in prune mode the 'network' flag will be disabled meaning you won't be advertising that other nodes can download older block information from you to sync up.
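The 'network' flag here is the NODE_NETWORK service bit that a node advertises in its P2P version message; a pruned node clears it so peers won't try to fetch historical blocks from it. A minimal sketch of the check (the function name is mine, but NODE_NETWORK = 1 is the real service bit value):

```python
NODE_NETWORK = 1 << 0  # service bit: peer can serve the full historical chain

def can_serve_old_blocks(service_flags):
    # A pruned node does not set NODE_NETWORK in its version message,
    # so peers syncing from scratch will skip it for historical blocks.
    # It still relays transactions and recent blocks as normal.
    return bool(service_flags & NODE_NETWORK)
```

So a pruned node still participates in relay; it just stops advertising itself as an archival source.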
|
|
|
apparently, Sigop DDoS attack is possible now, because the Sigops per block limit is too high. Why isn't anyone using the attack then?

Always assume that if a malicious vector exists then someone will try and exploit it. From memory, the closest we've come to that to date was that single-transaction 1MB block from f2pool that took nodes up to 25 seconds to validate. It is possible to make it much worse, but newer versions of bitcoind (and probably faster node CPUs) would have brought that down. Rusty at the time estimated it could still take up to 11 seconds with 1MB: https://rusty.ozlabs.org/?p=522 So yeah, it has been used... possibly unwittingly at the time.
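The reason this gets so much worse with bigger blocks is the quadratic scaling mentioned throughout the thread: with legacy signature hashing, each OP_CHECKSIG re-hashes roughly the whole transaction. A rough back-of-envelope model (my simplification, not exact protocol accounting - real costs depend on input/output layout and sighash type):

```python
def hashed_bytes(tx_size_bytes, n_sigops):
    # Rough model: each legacy sigop re-hashes approximately the whole
    # transaction, so total bytes hashed ~ tx_size * sigop_count.
    return tx_size_bytes * n_sigops

# Doubling a single-transaction block roughly quadruples the hashing work,
# because both the transaction size and the number of sigops it can
# contain double at the same time (the sigop counts are illustrative).
work_1mb = hashed_bytes(1_000_000, 5_000)
work_2mb = hashed_bytes(2_000_000, 10_000)
ratio = work_2mb / work_1mb
```

That 4x-per-doubling behaviour is why a 2MB worst-case block is estimated at far more than twice the f2pool block's validation time.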
|
|
|
Code them up together, but allow each component to be activated *separately*, thus allowing clients to choose which component they wish to support... I suspect support for BIP102 will be a lot higher now (yes, I know about the quadratic scaling issue).
That certainly sounds like a good idea, if the community decides to support this proposal. Would Core allow activating that kind of compromise proposal coded into a real pull request?

Core would not, because they're all convinced we must have segwit before increasing the block size to prevent a quadratic-scaling sigop DDoS happening... though segwit doesn't change the sigops included in regular transactions; it only makes segwit transactions scale linearly, which is why the blocksize increase proposal is still not on the hard roadmap for core as is. If block generation is biased against heavy-sigop transactions in the core code (this does not need a consensus change, soft fork or hard fork) then pool operators would have to consciously try to include heavy-sigop transactions intentionally in order to create a DDoS-type block - would they do that? Always assume that if a malicious vector exists then someone will try and exploit it, though it would be very costly for a pool to risk slow block generation/orphaning by doing so.
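Biasing block generation against heavy-sigop transactions could look something like the following toy selection policy: charge each transaction block space proportional to its sigop count as well as its byte size, then fill the block greedily by fee per adjusted byte. This is a sketch of the idea only (the 50 bytes-per-sigop figure and the helper names are my illustrative assumptions, not Core's actual template code, though Core has a comparable bytes-per-sigop notion in its block assembly):

```python
def sigop_adjusted_size(tx_bytes, sigops, bytes_per_sigop=50):
    # Treat each sigop as occupying at least `bytes_per_sigop` bytes of
    # block space, so sigop-dense transactions pay proportionally more.
    return max(tx_bytes, sigops * bytes_per_sigop)

def select_txs(mempool, max_block_size):
    # Greedy fill by fee per adjusted byte (toy policy, no packages/CPFP).
    ranked = sorted(
        mempool,
        key=lambda tx: tx["fee"] / sigop_adjusted_size(tx["size"], tx["sigops"]),
        reverse=True,
    )
    used, chosen = 0, []
    for tx in ranked:
        cost = sigop_adjusted_size(tx["size"], tx["sigops"])
        if used + cost <= max_block_size:
            chosen.append(tx)
            used += cost
    return chosen

# A sigop-stuffed transaction is priced out even at the same fee and size.
mempool = [
    {"fee": 1000, "size": 500, "sigops": 100},  # heavy: adjusted cost 5000
    {"fee": 1000, "size": 500, "sigops": 1},    # normal: adjusted cost 500
]
block = select_txs(mempool, max_block_size=5000)
```

The point made above stands: this is pure local policy, so no consensus change is needed, and a pool would have to deliberately override it to build a DDoS-style block.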
|
|
|
|