as you know, even Gavin talks about this memory problem from UTXO. and yes, i read the Reddit thread that resulted, in which you participated, and i'm aware that UTXO can be dynamically cached according to need. http://gavinandresen.ninja/utxo-uhoh
Gavin was insufficiently precise. The resulting reddit thread is full of people calling Gavin a fool for saying "memory" when he should have been saying fast storage. https://twitter.com/petertoddbtc/status/596710423094788097
Why do you think it's prudent to argue this with me?
Okay, let's make a bet. Since you're so confident, surely you'll grant me 1000:1 odds? I'll give my side away to a public cause.
The question is: "Is the entire UTXO set kept in RAM in any version of Bitcoin Core ever released?"
I will bet 3 BTC and, at those 1000:1 odds, if you lose you'll pay 3000 BTC (which I will give to the hashfast liquidators, to return it to the forum members it was taken from; that will also save you some money in the ongoing lawsuit against you).
Sound good? If so, how will we adjudicate? If not, what is your counter-offer on the terms?
i didn't say this full-block spam attack we're undergoing wasn't affecting my node at all. sure, i'm in swap b/c of the huge # of unconf tx's, but it hasn't shut down or stressed my nodes to any degree. one of the arguments by Cripplecoiners was that these large-block attacks would destabilize full nodes and shut them down, resulting in centralization. i'm not seeing that.
The largest backlog of unconfirmed transactions I've ever seen is about 8 MB. Even if we assume the real max was 3x that, it does not explain your hundreds of megabytes of swap. We just had half the hashpower of the network mining without validating, creating multiple large forks and large reorganizations, but you don't see any destabilization. Okay.
Let me chime in here quickly, because I think Greg and I are talking about slightly different things. My model was considering the time between the first moment a pool could begin hashing on a block header, and the moment the previous block had been processed, a new non-empty block template constructed, and the hashers re-assigned to work on this non-empty block.
It looks like this time, empirically, is 15 sec (F2Pool) and 30 sec (AntPool), based on these estimates
Here I suspect you're suffering from an excess of empiricism without adequately delving into the mechanism. You can directly measure the time from input to minable on an actual node under your control, and you will observe that it is hundreds of times faster than your estimate. Why? Miners don't magically know when their pool has new work: they get work in the first milliseconds and then grind on it for some time before returning it. Even if the pool long-polls them, it takes time to replace work in flight. So what I suspect you're actually measuring there is the latency of the mining process itself... which is consistent with what we've experienced with P2Pool (5-20 second latencies from ASIC miners are common).
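To make the point concrete, here is a toy simulation (my own construction, nothing measured from either pool; the numbers are assumptions): even if the node validates a block and builds a fresh template in a couple hundred milliseconds, the first *observed* work switch lags by the miner-side work-cycle latency, which alone lands in the 15-30 s ballpark.

```python
import random

random.seed(1)

# Assumed figure, for illustration only: time for the node itself to
# validate the new block and construct a non-empty template.
NODE_PROCESS_MS = 200

observed_ms = []
for _ in range(10_000):
    # ASIC hashers grind their current work item before returning a share;
    # 5-20 s device latencies are common per the P2Pool observation above.
    device_latency_ms = random.uniform(5_000, 20_000)
    # Work on the new template only becomes *observable* once a device
    # that fetched it finishes a work cycle.
    observed_ms.append(NODE_PROCESS_MS + device_latency_ms)

mean_s = sum(observed_ms) / len(observed_ms) / 1000
print(f"mean observed work-switch time: {mean_s:.1f} s")
```

The mean comes out around 12-13 seconds: almost entirely device latency, with node processing a rounding error, which is the shape of the 15 s / 30 s empirical estimates.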
I noticed you posted the result of a classification; did you run the same data through a simple logistic regression with prior size as the treatment? The intercept of that model would be interesting.
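For concreteness, a minimal sketch of the regression I mean, on synthetic data (in the real analysis the treatment would be the observed size of the prior block and the outcome whether the next block was empty; the coefficients below are invented for illustration). Numpy-only, fit by plain Newton-Raphson:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
prev_size_kb = rng.uniform(0, 1000, n)       # treatment: prior block size (kB)
# Synthetic outcome: baseline log-odds of -2, small positive size effect.
true_logit = -2.0 + 0.002 * prev_size_kb
empty = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

def fit_logistic(X, y, iters=25):
    """Logistic regression via Newton-Raphson; no stats library needed."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        grad = X.T @ (y - p)                      # score
        H = (X * (p * (1 - p))[:, None]).T @ X    # Fisher information
        beta += np.linalg.solve(H, grad)
    return beta

X = np.column_stack([np.ones(n), prev_size_kb])   # intercept + treatment
intercept, slope = fit_logistic(X, empty)
print(intercept, slope)
```

The intercept recovers the baseline empty-block rate independent of prior size, which is why it's the interesting number: a large negative intercept with a positive slope is the signature you'd expect if only large prior blocks provoke validationless mining.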
But indeed, these conversations have been conflating several separate issues (latency vs. throughput, etc.). Tricky to avoid, since they're all relevant.
but you haven't verified that F2Pool or AntPool have increased their minrelaytxfee to minimize their mempools, have you?
I have: they'd previously cranked it down, were producing small blocks, and were flamed for it in public. They've since turned it back up.
remember, this whole mempool discussion started with you responding to Peter's mathematics post the other day. you argued that block verification took only 80 ms for a 250 kB block b/c tx's had already been verified as they were passed around to all nodes across the network and didn't require re-verification by miners on the relay network, and that this refuted his hypothesis that increasing block verification times (16-37 sec on avg) lead to SPV mining.
As PeterR points out, they only need to wait for verification in order to actually verify (which they're not doing today), though they may have to wait longer to include transactions. But I'd point out that's not fundamental: e.g., no matter how big the backlog is, you can produce a template sufficient to completely fill a block while doing no more work than handling a mempool of twice the maximum block size (by using a tiered mempool, though no one has bothered to implement this yet; no one has even been complaining about how long createnewblock takes, due to the ability to produce empty blocks without skipping transactions).
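A sketch of what I mean by a tiered mempool (my own illustrative code; nothing like this exists in Bitcoin Core, and the names and limits are assumptions): bound the "hot" tier at twice the maximum block size, spill everything else to a cold backlog, and have template construction touch only the hot tier, so its cost is independent of backlog depth.

```python
import heapq

MAX_BLOCK_BYTES = 1_000_000          # illustrative 1 MB block limit
HOT_CAP_BYTES   = 2 * MAX_BLOCK_BYTES

class TieredMempool:
    def __init__(self):
        self.hot = []        # min-heap of (feerate, size, txid)
        self.hot_bytes = 0
        self.cold = {}       # unbounded backlog: txid -> (feerate, size)

    def add(self, txid, fee, size):
        heapq.heappush(self.hot, (fee / size, size, txid))
        self.hot_bytes += size
        # Spill the lowest-feerate txs once the hot tier exceeds 2x the
        # block size, so template construction never scans the backlog.
        while self.hot_bytes > HOT_CAP_BYTES:
            feerate, sz, tid = heapq.heappop(self.hot)
            self.hot_bytes -= sz
            self.cold[tid] = (feerate, sz)

    def make_template(self):
        # Greedy fill, highest feerate first, from the bounded hot tier.
        block, used = [], 0
        for feerate, sz, tid in sorted(self.hot, reverse=True):
            if used + sz <= MAX_BLOCK_BYTES:
                block.append(tid)
                used += sz
        return block, used
```

A real version would also promote cold transactions back into the hot tier as blocks confirm and handle dependency chains; both are omitted here since the point is only that the work per template is bounded by the hot-tier cap, not the backlog.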