301  Bitcoin / Development & Technical Discussion / Re: Taproot proposal on: June 01, 2021, 07:30:47 PM
the only risk is if a merchant/service/exchange is relying on the unupgraded pool Y as a chain source, whereby the merchant sees the temporary block and thinks it's valid. And then a user that did (1) may have an opportunity to double spend.
which is another reason why merchants should not accept 0-2 confirmations
Agreed, though the merchant in that case can be protected against an unupgraded miner by the merchant themselves upgrading.  If the merchant is upgraded they won't see the invalid confirmation.

That a miner who intentionally produces an invalid block can cause non-validating clients to see a false confirmation is always true, not unique to taproot.  It is, however, a pretty expensive attack.

The high hashrate means that any invalid confirmations will go away quickly, and confirmations that go away after a block or two are already something people need to be prepared to handle due to reorgs.

So if you're upgraded you're protected by your upgrade, and if you're not-- you're still protected by the high hashpower.   If you're not validating at all, you're only protected by high hashpower, taproot or not.

One wrinkle in this is that right now many (most?) pools spend part of their mining time mining on non-validated work, because this is an easy 'optimization'.  I don't think it's a rationally justified optimization, because you can get the same speedup from other approaches without losing validation, but not validating has a low upfront cost.  This can cause short chains of invalid blocks where otherwise there would just be a single invalid block, and it really hurts the security assumptions of lite clients.  Fortunately, for the moment these non-validated blocks contain no txn except for the coinbase.  That isn't fundamental either (again, it's just the easiest thing to do), so wallets should probably not count chains of consecutive empty blocks as confirmations for confirmation-counting purposes.
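To make that last suggestion concrete, here's a minimal sketch of such a wallet-side heuristic (entirely hypothetical -- the names and structure are illustrative, not anything that exists in Bitcoin Core): confirmations contributed by a run of coinbase-only blocks at the tip simply aren't counted.

Code:
// Hypothetical wallet heuristic, per the suggestion above: don't credit a run of
// consecutive coinbase-only ("empty") blocks at the tip as confirmations, since
// such blocks may have been mined on non-validated work.
#include <cstddef>
#include <vector>

struct BlockSummary {
    std::size_t tx_count;  // total transactions in the block, including the coinbase
};

// blocks_above: summaries of the blocks stacked on top of the block containing
// the transaction, ordered oldest first.  Returns an adjusted confirmation count.
std::size_t AdjustedConfirmations(const std::vector<BlockSummary>& blocks_above)
{
    std::size_t confirms = 1 + blocks_above.size();  // the including block counts as 1
    // Walk back from the tip, discounting consecutive coinbase-only blocks.
    for (auto it = blocks_above.rbegin(); it != blocks_above.rend(); ++it) {
        if (it->tx_count > 1) break;  // first non-empty block: stop discounting
        --confirms;
    }
    return confirms;
}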

302  Bitcoin / Development & Technical Discussion / Re: What's the reason for not being strict about Taproot witness program size? on: May 30, 2021, 04:17:46 PM
It just leaves more of the witness V1 space open for other usages that have different sizes.  No need to needlessly close off extensibility.
303  Bitcoin / Development & Technical Discussion / Re: Taproot proposal on: May 29, 2021, 03:13:47 PM
Pre-Taproot nodes that support SegWit see the new witness version (version 1)
That is all correct, but I wanted to point out that the taproot spends are also non-standard under prior releases (e.g. even without segwit), just since your description might have made it sound like they weren't.
304  Bitcoin / Development & Technical Discussion / Re: Taproot proposal on: May 27, 2021, 10:22:00 PM
Up to that point if a majority of the miners downgraded (or never actually upgraded - just signalled) it would not manage to actually maintain the longest chain so if someone published a currently valid tx that was invalid under taproot we would have a chain split... (I think I am right on this?)
The purpose of the delayed activation is to give plenty of time for everyone to upgrade.

Bitcoin doesn't follow the longest chain, it follows the longest valid chain-- and at least to upgraded parties any taproot-invalid chain wouldn't be valid.  So the idea is that it doesn't even matter too much if such an invalid chain is created, as it'll just get ignored and the effect would just be some hashrate vanishing (until the losses cause them to wake up and fix their stuff Smiley).  And this holds no matter how much hashrate there is...

Of course, parties who aren't transacting and are asleep at the switch might not have upgraded but if they're not accepting transactions in realtime it doesn't much matter what chain they are on. Smiley

So I think that really takes the wind out of the chaos conspiracy theory. Tongue  But it's a good reason to get major exchanges and other influential parties that accept a lot of payments to confirm that they've upgraded prior to activation.

Also, "someone published" isn't sufficient.  Pre-taproot nodes (and miners) will not relay or mine taproot spends.  So the only way a block could be created with an invalid spend in it is if some miner intentionally did so (or at least lobotomized the protection out of their node), even if their software wasn't upgraded.
305  Bitcoin / Development & Technical Discussion / Re: Block explorers oligopoly. on: May 25, 2021, 02:28:30 PM
No one should be using block explorers: using them trashes your privacy by leaking addresses you're interested in to third parties.

Your wallet should be providing all the monitoring of your own addresses that you need.

The Blockstream block explorer is open source, so if you must use one-- you can run it yourself. ... if you can manage the resources.  Block explorers are fundamentally less scalable than Bitcoin is, so they're pretty much always going to end up centralized in practice.  This is another reason that it's important not to use them.
306  Bitcoin / Development & Technical Discussion / Re: Sample taproot spending transaction on signet on: May 25, 2021, 02:25:09 PM
https://github.com/bitcoin/bitcoin/pull/22051
307  Alternate cryptocurrencies / Altcoin Discussion / Re: is there a way to stop rugpulls? on: May 23, 2021, 07:51:06 PM
Just don't use scam blockchains.  This isn't a practical issue for Bitcoin, because Bitcoin isn't a system by scammers, for scammers, with a user population that is by definition too ignorant to avoid scams.  Or ask your questions in an appropriate subforum...
308  Bitcoin / Development & Technical Discussion / Re: Bitcoind error undefined reference to `GetBlockValue(int, long long const&)' on: May 22, 2021, 11:23:24 PM
You're compiling an old, unsupported, and broken version.  The current version is 0.21.1 which is available at https://bitcoincore.org/en/download/

There was no GetBlockValue in 0.10.  You're either trying to get help with some altcoin and concealing it to waste Bitcoiner time, or you haven't run make clean first.  But regardless, you shouldn't be using that code.
309  Bitcoin / Development & Technical Discussion / Re: Taproot proposal on: May 21, 2021, 10:50:52 PM
It's normal for pools to have multiple servers.  Some have upgraded partially for risk reduction reasons, presumably they'll go ahead and upgrade the rest when they're confident that the upgrade is stable.

Taproot.watch now has a "Potential" field, which counts all partially signaling pools as fully signaling pools all the time (instead of only when their latest block signaled).  It currently reads 95.61% --- meaning if BTC.com & others who are partially upgraded fully upgrade before the next window it is extremely likely that it will lock in during that window.

310  Bitcoin / Development & Technical Discussion / Re: Proposal: including (UTXO) state hash in blocks (to eliminate IBD for new nodes) on: May 18, 2021, 01:10:12 AM
If you're happy blindly trusting miners, Bitcoin has had a name for that security model for a long time: SPV.  Feel free to use it.

Having lots of people run the SPV model has highly visible failure modes-- we see this already in Ethereum: major exchanges configure their nodes to wipe their state and fast-sync from the miners if they get stuck.  This means a majority of miners can likely substantially override the system and ignore the rules with respect to some (if not almost all) of the major economic players.  The vulnerability is real although it hasn't been exploited yet.

Layering on stuff about querying multiple nodes just obfuscates the vulnerability it doesn't remove it.  If querying multiple nodes worked Bitcoin could have just used it instead of mining.  It's easy to spin up hundreds of thousands of fake nodes and spoof that kind of checking.

Assumeutxo doesn't blindly trust miners-- it requires that the software you're running is correct, which is already a requirement.  Like assumevalid, an assumeutxo hash is much easier to audit than most software changes (just check a newly proposed value against your already-running node).  Though, as you mentioned, there have been suggestions to include the hashes in blocks for an additional independent verification.  The downside of a consensus commitment is that the format becomes normative and the latency of computing it becomes critical.  This would stifle innovation and create a need to have everything mostly perfect the first time through... not ideal.

Quote
It also makes it possible to significantly increase block size (since nobody needs to store the full-chain anymore, everyone can be a pruned node).
Block size is a concern for a lot more than just IBD time, and if actually validating the chain becomes infeasible then Bitcoin's core security guarantees are essentially lost.  Part of the reason for the block size is to create a market for space to generate fees to pay for security, so even with all other considerations aside (propagation, at tip resource usage, utxo set growth rate, etc.) you would still be left with the fact that without the resource constraint there is no mechanism to pay for security other than introducing perpetual inflation.

Quote
While the UTXO set is 4GB or so, a solution can be made to make it easier for nodes and miners to hash the 4GB UTXO set. Merkle Tree?
You should read more of what's been discussed.   Any kind of hash tree commitment is quite expensive to update, e.g. increasing IO costs by a factor of log(txouts) is not great when you're talking about a billion txouts.

The assumeutxo work uses a rolling multiset hash, which has no such cost, and if it's not committed it doesn't put anything in the latency-critical path.  Perhaps research it some more before dismissing it?  I think it's a lot more of a realistic near-term improvement than anything you're suggesting!
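To give a flavour of what a rolling multiset hash buys you, here's an illustrative toy (tiny parameters and a non-cryptographic element hash; the actual construction used for the UTXO set hash, MuHash, works over a roughly 3072-bit modulus with a proper hash): the set digest is a product in a prime field, so adding a created output or removing a spent one is a single modular multiplication, with none of the per-update log(txouts) IO of a hash tree.

Code:
// Toy rolling multiset hash: the digest of a set is the product of per-element
// hashes in a prime field.  Insertion multiplies, deletion multiplies by the
// modular inverse, so each update is O(1) regardless of the set size.
// (Toy parameters only -- NOT cryptographically secure.)
#include <cstdint>
#include <string>

static const uint64_t MOD = 0xFFFFFFFFFFFFFFC5ull;  // largest 64-bit prime

static uint64_t MulMod(uint64_t a, uint64_t b) { return (uint64_t)((__uint128_t)a * b % MOD); }

static uint64_t PowMod(uint64_t base, uint64_t exp) {
    uint64_t result = 1;
    while (exp) {
        if (exp & 1) result = MulMod(result, base);
        base = MulMod(base, base);
        exp >>= 1;
    }
    return result;
}

// Hash a serialized UTXO to a nonzero field element (toy FNV-style hash).
static uint64_t ElementHash(const std::string& utxo) {
    uint64_t h = 1469598103934665603ull;
    for (unsigned char c : utxo) { h ^= c; h *= 1099511628211ull; }
    h %= MOD;
    return h ? h : 1;
}

struct RollingSetHash {
    uint64_t digest = 1;
    void Add(const std::string& utxo)    { digest = MulMod(digest, ElementHash(utxo)); }
    void Remove(const std::string& utxo) { digest = MulMod(digest, PowMod(ElementHash(utxo), MOD - 2)); }  // inverse via Fermat
};

Two nodes that have added and removed the same outputs end up with the same digest no matter what order the updates happened in, which is what makes it a multiset hash.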
311  Bitcoin / Development & Technical Discussion / Re: Taproot proposal on: May 14, 2021, 07:53:57 AM
FWIW, 22% of my peers claim to be running 0.21.1 or 21.99.0 (master); that may be because I ban a massive number of fake spy nodes, which may be diluting the figures elsewhere.  I also have tor inbounds, and those peers may tend to be more frequently updated.

(To be clear, I don't claim that this little sample is representative, just adding a bit of color)
312  Bitcoin / Development & Technical Discussion / Re: Taproot proposal on: May 14, 2021, 06:46:59 AM
As far as signalling goes we know the hashrate percentages, but here are the node percentages:
According to https://bitnodes.io/ only 16.24% have upgraded to 0.20.1
According to https://luke.dashjr.org/programs/bitcoin/files/charts/software.html only 7.40% of all nodes upgraded to 0.20.1

If you are looking for a reason why miners aren't yet jumping on board in this first adjustment period maybe that's the reason.

If so, there might be a miscommunication.  The minimum activation height exists because it's known that it takes MONTHS for a substantial fraction of the network to upgrade even when there is some urgent bug.  Plus, the primary advantage of having soft forks triggered by a super-majority hash rate is that they're safe to activate even if relatively few nodes have upgraded (so long as the super-majority hashpower has upgraded).

Maybe it wasn't your point-- but there is no particular reason to wait for nodes.  And, actually 16.2% of listeners on 0.20.1 already sounds pretty fast to me!
313  Bitcoin / Bitcoin Discussion / Re: "Satoshi set the total supply of bitcoins to 21 Million" is nonsense on: May 14, 2021, 05:59:16 AM
I don't want to be pedantic, but there is a line that limits the units to 21,000,000 on amount.h:
Code:
static const CAmount MAX_MONEY = 21000000 * COIN;

I get your point, but that static constant does limit the maximum number of coins.
I want to be pedantic.  That line doesn't actually limit the maximum number of coins!

Only the halving of subsidy limits the maximum number of coins.

MAX_MONEY is used in various places to clamp transaction amounts. For example, a transaction can't pay an output more than MAX_MONEY coins (as that would just be impossible, so any number over MAX_MONEY is invalid).  This is just as odolvlobo says.

If you broke the halving but left MAX_MONEY in place the system could get more than 21m BTC, though you might trigger bugs and crashes if more than 21M got spent at once in a single transaction.
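If you want to convince yourself of that, here's a quick sketch (simplified from the shape of Bitcoin Core's GetBlockSubsidy; the helper names and the standalone main are just illustrative) that sums the subsidy of every block the halving schedule will ever pay out -- the total lands just under 21 million BTC without MAX_MONEY ever entering into it.

Code:
// Sketch: sum the subsidy of every block under the halving schedule.  This is
// what actually bounds the supply; it totals just under 21,000,000 BTC.
#include <cstdint>
#include <cstdio>

using CAmount = int64_t;
static const CAmount COIN = 100000000;       // satoshis per BTC
static const int HALVING_INTERVAL = 210000;  // blocks between halvings

CAmount BlockSubsidy(int height) {
    int halvings = height / HALVING_INTERVAL;
    if (halvings >= 64) return 0;            // shifting by >=64 would be undefined
    return (50 * COIN) >> halvings;
}

int main() {
    CAmount total = 0;
    for (int era = 0; BlockSubsidy(era * HALVING_INTERVAL) > 0; ++era) {
        total += (CAmount)HALVING_INTERVAL * BlockSubsidy(era * HALVING_INTERVAL);
    }
    // Prints 20999999.97690000 -- strictly below the 21M MAX_MONEY clamp.
    std::printf("total supply: %lld.%08lld BTC\n",
                (long long)(total / COIN), (long long)(total % COIN));
    return 0;
}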

This is deep in the weeds and inconsequential,  but since we were already being pedantic we ought not be half-assed about it.  Only full-assed pedantry for me!

All that said, saying Satoshi set the limit to 21M BTC is accurate enough in my view.  I mean, if you want to get really detailed he pressed buttons that commanded logic gates by switching electrons ... and so on.  Shall we really say that Satoshi's mother set the limit by giving birth to Satoshi, or her mother by giving birth to her, or that the limit was set when some ape reached out and touched a black monolith, giving birth to intelligent life and the inevitable creation of Bitcoin?   At some point you've got to approximate. Saying Satoshi set a specific limit gets the point across without getting into the weeds, unless the subsidy decline is actually relevant to your conversation. Tongue

Moreover, if you start talking to newbies about rewards going down, they often end up thinking mining becomes slower and slower and eventually blocks won't be created anymore.  Giving people too much detail often confuses them. Smiley
314  Bitcoin / Bitcoin Discussion / Re: Contrary to popular belief blockchains can make external web calls on: May 13, 2021, 09:09:48 AM
"A signing party provides a signature (or other verifiable data)" is a model that has been used for years in Bitcoin and elsewhere; it's not novel.

No one ever suggested that you couldn't do that.  It just isn't terribly valuable because your security still rests on the third party-- and if they were really that trustworthy why not eschew the blockchain and just have them process the transaction? There can be reasons, but it's those reasons that justify using such approaches...
315  Bitcoin / Development & Technical Discussion / Re: Are there any benchmark about Bitcoin full node client resource usage? on: May 10, 2021, 02:12:01 PM
But they are actually "grouped or something"
They aren't.  The mempool package code makes groups, but in blocks transactions aren't grouped in any way; the ordering is arbitrary beyond being topological.  But thanks for validating my point that applying a term used for actual grouping in the mempool would lead to confusion.

Quote
and there is no "dependent transaction" in bitcoin literature.
LMGTFY.

Quote
Firstly, it is actually N^2/2 for the worst case.
One doesn't generally write in the constant factors when discussing asymptotic behavior because the true constant factors depend on the exact operations performed.  Usually it's sufficient to just think in terms of the highest exponent.

Quote
Secondly, it is not how we design algorithms for real world problems, being too much concerned about a case that simply doesn't happen
It is when you write systems where reliability is important or when they must resist attack.  As much as possible, Bitcoin shouldn't have any quadratic operations in validation; even if one is "probably safe most of the time", it just wastes effort analyzing the attack potential (and risks getting it wrong or some other change violating the safety assumptions).

In your case there is no justification for the quadratic behavior, as the same processing can be accomplished without it.  If it were a question of trade-offs that would be one thing, but it isn't.  The quadratic behavior exists in your proposal purely because you're out of your depth.  I even explained how to fix it, but you didn't notice because you're too invested in claiming everyone else is wrong.

Quote
Oppositely, it is the current implementation of block verification that imposes an ordering requirement, my algorithm even doesn't enforce parent txns to show up earlier in the block. It has nothing to do with the order of transactions being dependent (your term) or not, once a transaction fails because of its input(s) not being present in the UTXO, it 'll be re-scheduled for a retry as long as at least one more confirmed transaction is added to the set, I don't even look at the txn's index in the block, though, it may be necessary because of consensus, if so, I can keep track of the index, waiting for a confirmed txn with lower index to retry.
You've missed my point. Yes, consensus requires that txn in blocks are in topological order, but that doesn't mean you need to process them that way.  The fact that your suggestion won't actually validate a transaction until it has validated its parents is unnecessary and inefficient, as is its fail-and-retry approach instead of picking work up again only when it can actually make progress.   Consider what your algorithm would do when every transaction depends on the one before it: it would process them all sequentially without any parallelism at all *at best* (and have quadratic behavior at worst).  Meanwhile, in that same case Bitcoin does the majority of the processing entirely in parallel.  One needs to have collected the outputs from the prior transactions to use them later, but one need not have validated any of the prior transactions.

Maybe if you'd take a minute off from lecturing us on what a brilliant architect you are-- and how Bitcoin developers are evil idiots, especially me, even though I haven't been a bitcoin developer for years-- you might actually learn something.

But who am I kidding?

If you can't manage to notice that you never accomplish anything but raging on a forum, while the people you constantly call idiots accomplish stuff, you're probably not going to learn anything.
316  Bitcoin / Development & Technical Discussion / Re: Are there any benchmark about Bitcoin full node client resource usage? on: May 09, 2021, 09:41:25 PM
Besides the fact that throughout the source code, in the comments, transactions with dependent inputs are referenced using this term, I tried but didn't find any other term to use for such "packaged" transactions when they are present in a block, it'll be highly appreciated if you would give me one.

100% of the use of the word is in the context of the mempool and tx selection.  As mentioned, 'packages' were introduced as part of child-pays-for-parent mining.  Blocks have contained transactions whose parents were in the same block from day one; the package concept in the mempool was introduced so that tx selection could include high-feerate children when the parents wouldn't otherwise make it into blocks.

There are simply no packages in blocks or in block validation.  The term you want is just *transaction*, maybe child transactions, or "transactions with unconfirmed parents".  There really isn't a term for what you're looking for because supporting the case of child txn in the same blocks just works in Bitcoin with no special additional code, so it's never been a separate freestanding thing that needed a name.

Calling that packages would be inappropriate because it suggests they're in some way grouped or something-- but they're not.  A dependent transaction can be anywhere in the block so long as it's after its parents.

Quote

So, it is not about parallel transaction processing because other than their input scripts, transaction are processed sequentially.
The validation of a transaction is almost exclusively the validation of its inputs.  Outputs are only checked for size and total amount, nlocktime is an integer comparison, as is the maximum weight. For these kinds of checks bouncing the cacheline over to another thread would take more time than the check.

Unless the signatures are cached, the signatures take almost all the time. And when the signatures are cached the inputs almost always are too (unless the coins cache was recently flushed); in cases where everything is cached, memory bandwidth (e.g. against sync overheads and copying) is probably the limiting factor... particularly because the validity of entire transactions is also cached, and if those lookups are all cache hits then the only validation that happens is nlocktimes and checking for conflicting spends, which is absurdly fast.  If you look at the bench numbers I posted in the other thread you'll see that with cold caches validation took 725ms (and that was on an 18 core system, though POWER, so its threading is higher overhead) while the hot cache example was 73ms.  So performance considerations are already dominated by worst cases, not average cases, as the average cases are already MUCH faster than the worst case.

It could queue entire transactions rather than individual inputs, but this would achieve worse parallelism, e.g. because a block pretty reliably has >6000 inputs these days but may have anywhere between 500 and 2000 txn.  At least at the time that code was written, working a transaction at a time was slower, per my chat logs. That might not be true anymore because more of the workload is full blocks, because more recent CPUs suffer more from overhead, etc. This is the sort of thing I was referring to in my prior post when I said that less parallelism might turn out to be faster now, due to overheads.
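As an aside, the signature cache mentioned above is conceptually just a set of already-verified (message, pubkey, signature) triples; a toy sketch of the idea might look like the following (the real cache in Bitcoin Core is a bounded, salted, lock-free design, so treat this only as a picture of the concept):

Code:
// Toy signature-cache sketch: remember triples that already verified, so that
// re-validating a transaction previously seen on the network can skip the
// expensive signature check entirely.
#include <string>
#include <unordered_set>

class ToySigCache {
    std::unordered_set<std::string> entries_;
    static std::string Key(const std::string& msg, const std::string& pubkey, const std::string& sig) {
        return msg + '\0' + pubkey + '\0' + sig;  // the real code stores a salted hash instead
    }
public:
    bool Contains(const std::string& msg, const std::string& pubkey, const std::string& sig) const {
        return entries_.count(Key(msg, pubkey, sig)) != 0;
    }
    void Insert(const std::string& msg, const std::string& pubkey, const std::string& sig) {
        entries_.insert(Key(msg, pubkey, sig));
    }
};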

Quote
BTW, I don't get it, how "using all the core" is enough for justifying your idea
Because if it's hitting a load average equal to the number of cores, it's using all the available resources, so adding more parallelism won't make it faster (and if it's not quite equal but a little lower, that bounds the speedup that can be had from more parallelism).  Of course, the load could be substantially overhead, so something with a high load average could still be made faster-- but it would be made faster by reducing overheads.  Nothing specific you've said addresses overhead; your focus is exclusively on parallelism, and your suggested algorithm would likely have extremely high overheads.

But most critically: It shows that the validation is multithreaded, which was the entire point of my message.

Quote
Few weeks ago someone in this forum told something about block verification in bitcoin being sequential because of "packaged transactions" (take it easy).
Turns out that people around here often have mistaken beliefs.  Among other sources they get propagated by altcoin scammers that claim to have fixed "problems" that don't even exist.  But just because someone was mistaken about something that is no reason to go on at length repeating the error, especially when someone shows up, with citations, to show otherwise.

The validation is multi-threaded. Period.  Stop claiming that the validation is single threaded simply because it's not inconceivable that it could be made faster or more parallel.

Quote
I proposed an algorithm for this and the discussion went on in other directions ...
Your proposed algorithm would almost certainly not work well in practice:  every iteration requires multiple read/write accesses to global data (the set of existing outputs).  Also, in the worst case it would have to make on the order of N^2 validation attempts: consider what happens if every transaction depends on the one before it and the parallel selection is unlucky, so that it attempts the first transaction last.  It would process and fail every transaction except the first, then process every transaction except the first and fail every one except the second, and so on. More commonly, each thread would just end up spending all its time waiting for control of the cacheline it needs to check for its inputs.

It's just trying to impose too strong an ordering requirement, as there is no need to validate in order. Bitcoin core doesn't today. Deserialization must be in-order because the encoding makes anything else effectively impossible. And the consensus rules require that outputs exist before they're consumed but that can be (and is) satisfied without ordering validation by first gathering up all the new outputs before resolving the inputs from txn that might consume them.

Quote
Additionally, I noted the excessive use of locks and became concerned about their impact on parallelism.
What use of locks did you note?  Searching that thread in the browser shows the word 'lock' never comes up.  And during input validation the only mutable data that gets accessed is the signature cache, which has been lock-free since 2016, so the only lock the validation threads should ever wait on is the one protecting their workqueue.

Quote
Honestly, this discussion didn't add anything to my knowledge about the code and bitcoin
Doesn't seem that much of anything ever does...

317  Bitcoin / Development & Technical Discussion / Re: Are there any benchmark about Bitcoin full node client resource usage? on: May 09, 2021, 06:53:14 PM
Your claim about "transaction packages" not being relevant here is totally false and laughable
There is no such thing as 'transaction packages' in blocks from the perspective of validation.

Packages are a concept used in selecting transactions to mine so that low-fee ancestor transactions will be selected in order to mine their high fee children. They don't exist elsewhere.

Quote
Quote
The verification of transactions runs in parallel with everything else. One thread loads transactions from the block into a queue of transactions that need to be validated, other threads pull transactions from the queue and validate them.  When the main thread is done loading the queue, it too joins into the validation which has been in progress the whole time.  There is nothing particularly fancy about this.
Absolutely false and misleading. The code does not "load" transactions to "queue of transactions" you are deliberately misrepresenting the code for some mysterious purpose that I don't understand.

Sure it does. That is exactly how it works.  The validation loop iterates over each transaction and each input in each transaction, and for each one it loads it into a queue.  Concurrently, background threads take work from the queue to validate it.

When the master thread is done loading work it also joins the processing until the queue is empty (if it isn't already empty by the time it gets there).

Quote
In the real world, the real bitcoin core client, does NOT validate transactions in parallel,
It does. Since you're non-technical I don't expect you to read the code to check for yourself, but you can simply run a reindex on a node with -assumevalid set to false.  You'll see that once it gets up to 2014 or so it is using all the cores.

Quote
because in bitcoin there is a possibility for transactions to be chained in a single block, forming a transaction package, hence you can't simply "dispatch"  txns of a block between threads waiting for them to join, it is why block verification was implemented single thread.

Transactions in a block are required by the consensus rules to be topologically ordered.  That means that all the ancestors of a transaction come first.  There is no concept of a 'package' in a block.

When the validation is iterating through the block to load the validation queues it saves the new outputs created by each (as of yet unvalidated) transaction, so that they're available when dispatching the work off for any future transactions that consume them.  They don't have to be validated before other transactions can consume them, because if there is any invalidity anywhere the whole block will be invalid.

So you can have, e.g., an invalid TxA whose outputs are spent by valid TxB, whose outputs are spent by valid TxC, and it's perfectly fine that the validation accepts TxB and TxC before later detecting that TxA is invalid.  A's invalidity will trigger rejection of the block.

Extracting the outputs for other transactions to use does require a linear pass through the transactions but it's fairly inexpensive and doesn't require any validation.  It is required in any case because the block serialization can only be decoded in-order too (because you can't tell where transaction 2 begins until you've parsed transaction 1-- the format doesn't have explicit lengths).

Similarly, checking if a UTXO has already been consumed is also inherently somewhat sequential (e.g. consider when the 5th and 50th txn both spend the same input), but these checks are cheap.  Often attempts to make processes like that more parallel just slow them down because of the synchronization overheads.  That's why it's not possible to give much in the way of useful parallelism advice without testing.
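For readers trying to picture the structure described across these posts, here's a deliberately simplified sketch of the pattern (my own illustrative code, not the real ConnectBlock/CCheckQueue logic, with coinbase handling, amount checks, and many other details omitted): a single in-order pass spends inputs against the coins view, adds each transaction's new outputs so later transactions in the block can consume them, and queues the expensive per-input script checks for worker threads that drain the queue concurrently, with the loading thread joining the checking once it's done.

Code:
// Simplified sketch of parallel input validation.  The main thread walks the
// block in order: for each transaction it looks up and spends the inputs in the
// coins view (cheap, sequential), queues one expensive script check per input,
// then adds the transaction's own outputs so later transactions in the same
// block can spend them -- without waiting for any checks to finish.  Worker
// threads drain the check queue concurrently; any failure rejects the block.
#include <atomic>
#include <functional>
#include <map>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <utility>
#include <vector>

struct TxOut { long long value; std::string script_pubkey; };
struct TxIn  { std::string prev_txid; unsigned prev_index; std::string script_sig; };
struct Tx    { std::string txid; std::vector<TxIn> vin; std::vector<TxOut> vout; };

// Stand-in for real script/signature verification -- the expensive part.
static bool VerifyInputScript(const TxOut& spent, const TxIn& input) {
    return spent.value >= 0 && !input.script_sig.empty();  // placeholder check only
}

using Coins = std::map<std::pair<std::string, unsigned>, TxOut>;

bool ConnectBlockSketch(const std::vector<Tx>& block, Coins& coins, unsigned num_workers)
{
    std::queue<std::function<bool()>> checks;
    std::mutex mtx;
    std::atomic<bool> all_ok{true};
    std::atomic<bool> done_loading{false};

    // Workers start immediately and keep pulling checks while loading proceeds.
    auto run_checks = [&] {
        for (;;) {
            std::function<bool()> job;
            {
                std::lock_guard<std::mutex> lock(mtx);
                if (!checks.empty()) { job = std::move(checks.front()); checks.pop(); }
            }
            if (job) { if (!job()) all_ok = false; }
            else if (done_loading) return;
            else std::this_thread::yield();
        }
    };
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < num_workers; ++i) workers.emplace_back(run_checks);

    // Single in-order pass over the block (coinbase handling omitted).
    for (const Tx& tx : block) {
        for (const TxIn& in : tx.vin) {
            auto it = coins.find({in.prev_txid, in.prev_index});
            if (it == coins.end()) { all_ok = false; break; }  // missing or already-spent input
            TxOut spent = it->second;
            coins.erase(it);                                   // mark spent; cheap and sequential
            std::lock_guard<std::mutex> lock(mtx);
            checks.push([spent, in] { return VerifyInputScript(spent, in); });
        }
        if (!all_ok) break;
        for (unsigned i = 0; i < tx.vout.size(); ++i)          // outputs become spendable by later
            coins[{tx.txid, i}] = tx.vout[i];                  // txns before any checks complete
    }

    done_loading = true;
    run_checks();                 // the loading thread joins the validation work
    for (auto& w : workers) w.join();
    return all_ok;
}

The point of the sketch is only to show why a child transaction later in the block never has to wait for its parent to be *validated* -- the parent's outputs are available as soon as they've been gathered -- and why the per-input script checks, which dominate the cost, can all proceed in parallel.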

Quote
PR 2060 was an improvement, based on "deferring" the CPU intensive part of the task, i.e. script verification by queuing this part for future parallel processing.
It's not deferred, it's queued to different threads and run in parallel.

Quote
It was good but not a complete scaling solution because the main block validation process remaining single thread, occasionally waits for UTXO checks, so, we don't get linear improvement in terms of block processing times with installing more cpus/cores. Period.
You essentially never get linear improvement with more cores, as memory bandwidth and disk bandwidth don't scale with them and there are synchronization overheads. ... so that statement is essentially empty.

You've now moved the goal post entirely.  In the original thread NotATether misadvised ETFbitcoin that to benchmark validation he only needed to use a single core because 'verification is single threaded'.  I stepped in to point out that it's been parallel since 2012.  Benchmarking just a single core isn't representative as a result.

Your reply-- basically doubling down and defending misinformation you spread in the past-- is essentially off-topic.  If you want to argue that the parallelism isn't perfect, sure, it's not-- it pretty much never is, except for embarrassingly parallel tasks.  And surely there are opportunities to improve it, but they're not likely to be correctly identified from the armchair of someone who hasn't tried implementing or benchmarking them.  Your response is full of criticisms essentially copied and pasted from the benchmarks and the author's comments about the limitations back in 2012, but the code in Bitcoin has continued to advance since 2012.

But absolutely none of that has anything to do with a person saying that it's single threaded (it isn't) so you can just benchmark on a single core (you can't, at least not with representative results).

Ironically, if the parallelism actually were perfect then benchmarking on a single thread would again be a more reasonable thing to do (because you could just multiply up the performance).

Quote
Quote
When blocks contain only a few inputs this limits parallelism, but once blocks have a couple times more inputs than you have cores it achieves full parallelism.  
Absolutely false and misleading. I have not checked, but I suppose in 2012 most blocks had  ways more than "a couple times more inputs" than an average node's cpu cores.
In 2012 most blocks were created in 2010 or before and had few transactions at all.  Today most blocks were created in 2015 or later and the nearly empty blocks early in the chain are a rounding error in the sync time, and so most blocks have a great many inputs and make good use of multiple cores.  Could it be even better?  Almost certainly, though many things have been tried and found to not pay off... and other things have turned out to be so complicated that people haven't gotten far enough to benchmark them to see if they'll help yet.  I wouldn't be too surprised to learn that there was more speedup to be had from reducing synchronization overheads at the *expense* of parallelism-- esp since as cpu performance increases the relative cost of synchronization tends to increase (mostly because memory bandwidth has increased slower than cpu speeds have).

Regardless, my comment that it is parallel and has been since 2012 in no way suggests that the parallelism is magically perfect.

Your interactions have been cited to me multiple times by actual experts as the reason they don't post here at all.  It really makes it unenjoyable to post here knowing that, so predictably, a simple correction of a factual error with a cite will generate a rabid, substantially off-topic gish-gallop defending some bit of misleading info some poster had previously provided.  It makes the forum worse for everyone and there is just no need for it.

Quote
Reading comments like this, I'm realizing more and more that bitcoin is overtaken by junior programmers who do not miss any occasion for bragging with their
Essentially no one who programs bitcoin comments in this subforum anymore, though in the past almost all of them did.  I believe the only remaining person is Achow and he comments fairly infrequently.  Responses like yours make them not bother.
318  Bitcoin / Development & Technical Discussion / Re: Are there any benchmark about Bitcoin full node client resource usage? on: May 09, 2021, 03:14:57 PM
Alright, this is going to open another can of worms because I'm not sure how execution cap handles multiple cores. But on the plus side it looks like all your benchmark has to do is run bitcoin core with -reindex and then measure the time it takes to finish from debug.log and also using stuff like top to keep track of resource usage. But automatic profiling with systat where the metrics are stored in other log files is better IMO.

I don't think those cpu percentage limits are going to be useful for much-- I doubt they result in a repeatable measurement.

A better way to do a benchmark to get a tx/s figure is to use invalidateblock to roll back the chain 1000 blocks or so, then restart the process to flush the signature caches and reconsiderblock and collect data from that.

If you enable the bench debugging option you'll get data like this:

Quote
2021-05-01T19:10:02.540246Z received block 0000000000000000000922bf7fce4f900d7696f0c1c7221f97d3f367fdd9c44d peer=0
2021-05-01T19:10:02.553389Z   - Load block from disk: 0.00ms [0.00s]
2021-05-01T19:10:02.553414Z     - Sanity checks: 0.00ms [0.00s (0.00ms/blk)]
2021-05-01T19:10:02.553449Z     - Fork checks: 0.04ms [0.00s (0.04ms/blk)]
2021-05-01T19:10:03.255699Z       - Connect 2532 transactions: 702.21ms (0.277ms/tx, 0.116ms/txin) [0.70s (702.21ms/blk)]
2021-05-01T19:10:03.255837Z     - Verify 6043 txins: 702.38ms (0.116ms/txin) [0.70s (702.38ms/blk)]
2021-05-01T19:10:03.265095Z     - Index writing: 9.26ms [0.01s (9.26ms/blk)]
2021-05-01T19:10:03.265110Z     - Callbacks: 0.02ms [0.00s (0.02ms/blk)]
2021-05-01T19:10:03.265490Z   - Connect total: 712.10ms [0.71s (712.10ms/blk)]
2021-05-01T19:10:03.270861Z   - Flush: 5.37ms [0.01s (5.37ms/blk)]
2021-05-01T19:10:03.270885Z   - Writing chainstate: 0.03ms [0.00s (0.03ms/blk)]
2021-05-01T19:10:03.278491Z UpdateTip: new best=0000000000000000000922bf7fce4f900d7696f0c1c7221f97d3f367fdd9c44d height=681059 version=0x20800000 log2_work=92.840892 tx=637825747 date='2021-04-29T05:56:20Z' progress=0.998789 cache=2.3MiB(17699txo)
2021-05-01T19:10:03.278523Z   - Connect postprocess: 7.64ms [0.01s (7.64ms/blk)]
2021-05-01T19:10:03.278540Z - Connect block: 725.14ms [0.73s (725.14ms/blk)]

Unfortunately any kind of reindex or cold cache benchmark only tells you about the performance while catching up.

During normal operation there is normally no validation of transactions at all when a block is accepted, or only a couple-- they've already been validated when they were previously relayed on the network.

This is obvious when you look at the performance of blocks after a node has been running for a while:

Quote
2021-05-09T14:11:43.013649Z received: cmpctblock (10734 bytes) peer=14002
2021-05-09T14:11:43.017889Z Initialized PartiallyDownloadedBlock for block 0000000000000000000ccd134daad627f62fbb52258fbc400220cbcd7cd38639 using a cmpctblock of size 10734
2021-05-09T14:11:43.018046Z received: blocktxn (33 bytes) peer=14002
2021-05-09T14:11:43.023885Z Successfully reconstructed block 0000000000000000000ccd134daad627f62fbb52258fbc400220cbcd7cd38639 with 1 txn prefilled, 1715 txn from mempool (incl at least 0 from extra pool) and 0 txn requested
2021-05-09T14:11:43.028245Z PeerManager::NewPoWValidBlock sending header-and-ids 0000000000000000000ccd134daad627f62fbb52258fbc400220cbcd7cd38639 to peer=4
2021-05-09T14:11:43.029259Z sending cmpctblock (10734 bytes) peer=4
[...]
2021-05-09T14:11:43.032630Z sending cmpctblock (10734 bytes) peer=31588
2021-05-09T14:11:43.044382Z   - Load block from disk: 0.00ms [7.36s]
2021-05-09T14:11:43.044427Z     - Sanity checks: 0.01ms [1.48s (0.80ms/blk)]
2021-05-09T14:11:43.044492Z     - Fork checks: 0.07ms [0.10s (0.05ms/blk)]
2021-05-09T14:11:43.068471Z       - Connect 1716 transactions: 23.96ms (0.014ms/tx, 0.004ms/txin) [157.68s (84.87ms/blk)]
2021-05-09T14:11:43.068508Z     - Verify 6370 txins: 24.01ms (0.004ms/txin) [159.77s (85.99ms/blk)]
2021-05-09T14:11:43.081081Z     - Index writing: 12.57ms [18.65s (10.04ms/blk)]
2021-05-09T14:11:43.081107Z     - Callbacks: 0.03ms [0.05s (0.02ms/blk)]
2021-05-09T14:11:43.081346Z   - Connect total: 36.97ms [177.38s (95.47ms/blk)]
2021-05-09T14:11:43.092634Z   - Flush: 11.29ms [15.98s (8.60ms/blk)]
2021-05-09T14:11:43.092672Z   - Writing chainstate: 0.04ms [0.09s (0.05ms/blk)]
2021-05-09T14:11:43.117336Z UpdateTip: new best=0000000000000000000ccd134daad627f62fbb52258fbc400220cbcd7cd38639 height=682762 version=0x20000004 log2_work=92.865917 tx=640740048 date='2021-05-09T14:11:34Z' progress=1.000000 cache=196.6MiB(1235174txo)
2021-05-09T14:11:43.117376Z   - Connect postprocess: 24.71ms [42.18s (22.70ms/blk)]
2021-05-09T14:11:43.117393Z - Connect block: 73.01ms [242.98s (130.77ms/blk)]

So in those numbers you see that it spent 24.01ms verifying 6370 txins, compared to the earlier cold-cache example that spent 702.38ms verifying fewer (6043) txins.

Depending on what you're considering, that faster on-tip performance may not matter, because a miner could fill their block with new, never-before-seen txn, even ones constructed to be expensive to verify-- it's not the worst case.  The worst case can only really be characterized by making special test blocks that intentionally trigger the most expensive costs.

319  Bitcoin / Development & Technical Discussion / Are there any benchmark about Bitcoin full node client resource usage? on: May 09, 2021, 02:44:55 PM
I'm afraid, this PR wasn't helpful enough for utilizing modern multicore CPUs:
[...]  In practice, it has been experimentally shown (and even discussed in the PR 2060 related comments) that the overhead of the scheme rapidly outperforms gains once it is configured for utilizing more threads than the actual CPU cores and it stays around 30%

Of course something stops getting more gains if it uses more threads than CPU cores.  There are only so many cores and once you're using them entirely adding more threads just adds overheads.

The claim that transaction validation is single threaded is just false, and that is all my reply was pointing out. That PR is from 2012-- I linked it to show just how long tx processing has been parallel; there have been subsequent changes that improved the performance further.

Quote
Firstly it is NOT about parallel transaction verification, instead it postpones script verification by queuing it for a future multi thread processing which is called right before ending the block verification that is done single threaded. This thread joins to the final multi-thread script processing.
The logic behind such a sophisticated scheme apparently has something to do with the complexities produced by transaction packages where we have transactions with fresh inputs that are outputs of other transactions in the same block.

Stop engaging in abusive technobabbling.  There is no interaction with "transaction packages" or anything of the sort there.

The verification of transactions in blocks runs in parallel with everything else. One thread loads transactions from the block into a queue of transactions that need to be validated, other threads pull transactions from the queue and validate them.  When the main thread is done loading the queue, it too joins into the validation which has been in progress the whole time.  There is nothing particularly fancy about this.

Quote
Secondly, it uses cs_main lock excessively, keeping the lock constantly active through the script verification itself, hence leaving little room for parallelism to have a scalable impact on the block verification process.
All you're doing is copying from the PR summary, but it doesn't even sound like you understand what the words mean.  What the PR is saying is that it holds cs_main until the block has finished validating, so that the user never sees an incompletely validated block.  When blocks contain only a few inputs this limits parallelism, but once blocks have a couple times more inputs than you have cores it achieves full parallelism.  Remember again, this was written in 2012 when most of the chain had only a few transactions per block, so the fact that it was parallel for only a block at a time was a limitation then-- it's not much of a limitation now.  Today a typical block contains over 6000 inputs, so there is ample parallelism even when limited to a single block at a time.

Of course, there is always room for improvement.  But one improves things by actually understanding them, writing code, and testing it.  Not by slinging verbal abuse on a forum in a desperate attempt to impress the peanut gallery, so I guess you wouldn't know anything about that. Cheesy
320  Bitcoin / Development & Technical Discussion / Re: Taproot proposal on: May 07, 2021, 04:19:55 PM
With most of the Bitcoin Core developers remaining neutral,
You say a lot of weird stuff. Smiley  Bitcoin Core developers pushed this out; they're not "remaining neutral"-- that would be a terrible abdication of their responsibility as technical experts.