iCEBREAKER
Legendary
Offline
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
|
|
July 04, 2015, 06:07:59 PM |
|
Mining on headers wasn't a problem before and it isn't now, outside of Ant/f2's (intentionally?) broken/lossy implementation. The little BIP66 hiccup just drew attention to the existence of the trade-off between larger blocks and more frequent subsequent 0tx blocks. We know "why" miners (whether Chinese or not) try to mine empty blocks while they verify incoming tx - because it is profitable. Mining 0tx blocks helps cancel out orphans and if you don't do it, the other guys will. Care to comment on the observed "95%" compliance in reality translating to 64%? Please, try to spin that as A Good Thing for GavinCoin's desired 75% threshold.
1MB full blocks ARE in fact causing problems at multiple levels ALREADY. what you're seeing are compensation mechanisms that various actors are now forced to employ which is bad for the system in general as it is causing monetary losses; BTC block rewards for miners reversed, cancelled tx's for users and merchants. how this is not obvious is beyond me. as for the 95% compliance goof, i'll need more info for that one as that is more of a damnation for versioning in general than anything specific for Gavin's 75% proposal. i thought that the versioning is displayed in blocks ONLY IF the miners have already deployed the software upgrade?
"Compensation mechanisms?" Compensation for orphans - yes; compensation for full blocks - no. The relative fullness % of the blocks doesn't matter; it's their absolute size which supervenes upon time to verify.
Miners mining on headers-only to create empty blocks isn't anything new. You seem to have just suddenly discovered this trade-off activity exists, and are suffering under the delusion it has only recently begun in response to less-empty blocks.
Verifying 1MB blocks already tests the limits of what CPUs (and thus, as gmax points out, the BTC network) can handle. 8MB blocks will take 8X longer to process (or more, as RAM spills over to swap files), and we'll have 8X more empty blocks (or more, as the % of miners not SPV mining tends towards zero).
You're right that before we even consider GavinCoin's 75% proposal, we need more info on exactly how/why the "95% compliance goof" damns versioning in general. This goof was a non-event, but then again BIP66 wasn't a controversial, drama and chaos inducing, partisan trench war à la GavinCoin.
|
Monero
|
| "The difference between bad and well-developed digital cash will determine whether we have a dictatorship or a real democracy." David Chaum 1996 "Fungibility provides privacy as a side effect." Adam Back 2014
|
| | |
|
|
|
cypherdoc (OP)
Legendary
Offline
Activity: 1764
Merit: 1002
|
|
July 04, 2015, 06:19:07 PM |
|
Mining on headers wasn't a problem before and it isn't now, outside of Ant/f2's (intentionally?) broken/lossy implementation. The little BIP66 hiccup just drew attention to the existence of the trade-off between larger blocks and more frequent subsequent 0tx blocks. We know "why" miners (whether Chinese or not) try to mine empty blocks while they verify incoming tx - because it is profitable. Mining 0tx blocks helps cancel out orphans and if you don't do it, the other guys will. Care to comment on the observed "95%" compliance in reality translating to 64%? Please, try to spin that as A Good Thing for GavinCoin's desired 75% threshold.
1MB full blocks ARE in fact causing problems at multiple levels ALREADY. what you're seeing are compensation mechanisms that various actors are now forced to employ which is bad for the system in general as it is causing monetary losses; BTC block rewards for miners reversed, cancelled tx's for users and merchants. how this is not obvious is beyond me. as for the 95% compliance goof, i'll need more info for that one as that is more of a damnation for versioning in general than anything specific for Gavin's 75% proposal. i thought that the versioning is displayed in blocks ONLY IF the miners have already deployed the software upgrade?
"Compensation mechanisms?" Compensation for orphans - yes; compensation for full blocks - no. The relative fullness % of the blocks doesn't matter; it's their absolute size which supervenes upon time to verify.
large blocks=full blocks in this context; that's why the miners are compensating via SPV mining. they said just that and the evidence is that they are doing what they said they'd do.
Miners mining on headers-only to create empty blocks isn't anything new. You seem to have just suddenly discovered this trade-off activity exists, and are suffering under the delusion it has only recently begun in response to less-empty blocks.
of course, it's been done sparingly in the past; but now it has become a regular thing b/c for the first time, we have full blocks.
Verifying 1MB blocks already tests the limits of what CPUs (and thus, as gmax points out, the BTC network) can handle. 8MB blocks will take 8X longer to process (or more, as RAM spills over to swap files), and we'll have 8X more empty blocks (or more, as the % of miners not SPV mining tends towards zero).
once again, you assume we'll go to 8MB immediately. no way. only organic growth that will take time will take it that high. and then, if we continue to have an artificial cap in place that causes full blocks to potentially become a reality, the spammers will once again identify an opportunity and target to disrupt user growth by causing the same shit they are now at the expense of very little spam in aggregate.
You're right that before we even consider GavinCoin's 75% proposal, we need more info on exactly how/why the "95% compliance goof" damns versioning in general.
This goof was a non-event, but then again BIP66 wasn't a controversial, drama and chaos inducing, partisan trench war à la GavinCoin.
you need an eye exam. b/c you are blind. this type of confidence reducing event will continue to erode user and merchant growth rate. not to mention the price. and this just proves my contention that the chokers will never be able to identify when we are being choked.
|
|
|
|
iCEBREAKER
Legendary
Offline
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
|
|
July 04, 2015, 06:54:56 PM |
|
of course, it's been done sparingly in the past; but now it has become a regular thing b/c for the first time, we have full blocks.
you need an eye exam. b/c you are blind. this type of confidence reducing event will continue to erode user and merchant growth rate. not to mention the price.
and this just proves my contention that the chokers will never be able to identify when we are being choked.
Large miners have been SPV mining for years. IDK where you get this "sparingly" vs "regular" nonsense from. That bit of tradecraft was only subject to public controversy last night because of a harmless fork, resulting from the collision between Ant/f2pool's optimized/risky implementation and BIP66's ostensible vs actual support %. No matter how many times you breathlessly invoke violent "choking" imagery, the sky isn't going to fall if blocks are (predictably, given the infinity of demand and limitation of supply) full, causing tx fees to approach a penny or two.
@pierre_rochard: Those who would give up essential Decentralization, to purchase a little temporary Adoption, deserve neither Decentralization nor Adoption.
|
Monero
|
| "The difference between bad and well-developed digital cash will determine whether we have a dictatorship or a real democracy." David Chaum 1996 "Fungibility provides privacy as a side effect." Adam Back 2014
|
| | |
|
|
|
cypherdoc (OP)
Legendary
Offline
Activity: 1764
Merit: 1002
|
|
July 04, 2015, 07:14:18 PM |
|
of course, it's been done sparingly in the past; but now it has become a regular thing b/c for the first time, we have full blocks.
you need an eye exam. b/c you are blind. this type of confidence reducing event will continue to erode user and merchant growth rate. not to mention the price.
and this just proves my contention that the chokers will never be able to identify when we are being choked.
Large miners have been SPV mining for years. IDK where you get this "sparingly" vs "regular" nonsense from. That bit of tradecraft was only subject to public controversy last night because of a harmless fork, resulting from the collision between Ant/f2pool's optimized/risky implementation and BIP66's ostensible vs actual support %. No matter how many times you breathlessly invoke violent "choking" imagery, the sky isn't going to fall if blocks are (predictably, given the infinity of demand and limitation of supply) full, causing tx fees to approach a penny or two. @pierre_rochard: Those who would give up essential Decentralization, to purchase a little temporary Adoption, deserve neither Decentralization nor Adoption.
whoosh. first, provide evidence that header-sized blocks have ever been consistently mined in the past. the Mystery Miner from a few years ago came and died quickly. and, it doesn't even matter. the Chinese miners have already told everyone why they're using SPV defensive blocks. they think 1MB is large. you're arguing in the face of a fact. and the problem is the 1MB.
|
|
|
|
BlindMayorBitcorn
Legendary
Offline
Activity: 1260
Merit: 1116
|
|
July 04, 2015, 07:35:21 PM |
|
The banking crisis in Greece and the proposed 30% bail-in on balances of 8,000 euros got me thinking… Fucking bankster jews, your father is Satan, for you are liars, and he is the father of lies, and when you propose the depositors money to be cut and extracted to fill your pockets, you are truly speaking of your father. - Jesus (paraphrased)Calm down man. There are bad people among the Jews who are bankers but also good ones (such as Ben Bernanke who basically saved US economy in financial crisis from civil unrests by starting aggressive quantitative easinings). There are a lot of non-Jews as banksters, too. I guess in Finland the majority of banksters are goyim and our economy still sucks as the governement is licking the asses of EU with the situation with Russia. The money comes from the Creator. The Jews are the chosen people of God. This you can read from the book you call "Old Testament". If God chooses a people He is not acting like a mortal human being by changing His mind every time something not nice happens. What comes to your comment on killing jeshu (your later post), even the New testament tells it was not the Jews who killed but Roman soldiers (despite according to the Torah Jeshu deserved to die since he violated the commandment of Shabbath which is compulsory for the Jews). Don't quote trolls tho. He has his very own thread for racist ranting.
|
Forgive my petulance and oft-times, I fear, ill-founded criticisms, and forgive me that I have, by this time, made your eyes and head ache with my long letter. But I cannot forgo hastily the pleasure and pride of thus conversing with you.
|
|
|
Peter R
Legendary
Offline
Activity: 1162
Merit: 1007
|
|
July 04, 2015, 07:45:05 PM Last edit: July 04, 2015, 08:02:49 PM by Peter R |
|
One thing I can say for sure that last night's fork taught me is that miners will earn a greater profit on average if they are able to verify blocks faster.
Half-baked proof:
Let τ be the average time it takes to verify a new block, let T be the average block time (10 min), let H be the fraction of the network hashrate controlled by the miner, and let P_valid be the probability that an unverified block is, indeed, valid. Clearly, 0 < P_valid < 1.
The fraction of time the miner does not know whether the most recent block was valid is clearly τ / T, which means the fraction of the time the miner does know is 1 - τ / T = (T - τ) / T.
For a given block height, assume three outcomes can occur for the miner: (1) he finds a block before he's verified the previous block, (2) he finds a block after he's verified the previous block, (3) he does not find a block. Assume also that if he finds a valid block he receives the block reward*, unless he was mining on an invalid block (in which case he receives nothing).
The expectation value of the miner's revenue is the expected revenue during the time he doesn't know, plus the expected revenue during the time he does know:
<V> = (25 BTC) P_valid H (τ / T) + (25 BTC) H (T - τ) / T = (25 BTC) (H / T) [ T - τ (1 - P_valid) ]
What this shows is that since the subtracted term, τ (1 - P_valid), is strictly positive, the miner's expectation of revenue, <V>, is maximized if the time to verify the previous block is minimized (i.e., if τ is as small as possible). The limit, as τ -> 0, is
lim (τ -> 0) <V> = (25 BTC) H
QED
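A minimal Python sketch of this expectation (the 25 BTC reward and T = 10 min come from the argument above; H and P_valid are arbitrary assumptions just to get numbers out) shows <V> shrinking as τ grows:
Code:
# Expected revenue per block height vs. verification delay tau,
# <V> = R*H/T * [T - tau*(1 - P_valid)], maximized as tau -> 0.
R = 25.0          # block reward, BTC
T = 600.0         # average block interval, seconds (10 min)
H = 0.20          # miner's hashrate share -- assumed for illustration
P_valid = 0.95    # probability an unverified block is valid -- assumed

def expected_revenue(tau):
    """Expected revenue (BTC) for a verification delay of tau seconds."""
    return R * H / T * (T - tau * (1.0 - P_valid))

for tau in (0.0, 5.0, 15.0, 30.0, 60.0):
    print(f"tau = {tau:5.1f} s  ->  <V> = {expected_revenue(tau):.6f} BTC")
Whatever H and P_valid you plug in, the printout is strictly decreasing in τ, which is the whole point.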
How does this relate to the blocksize debate?
As the average blocksize gets larger, the time to verify the previous block also gets larger (assuming no other technical innovations). This means that miner profitability begins to depend more heavily (than it does now) on how quickly they can verify blocks. And since mining has thin profit margins, this means that miners will be motivated to improve how quickly their nodes can perform the ECDSA operations needed to verify blocks.
Greg Maxwell once said that he wanted to launch an alt-coin where the proof-of-work was based on ECDSA verify operations. The reason, he said, was to incentivize optimization of ECDSA libraries or even the development of custom ASICs to perform the work. The analysis above suggests we can achieve something similar by increasing the blocksize and allowing normal market competition to do its thing.
*For the sake of this argument, assume the probability of orphaning a valid block is zero (I don't think this assumption affects the current results but it makes the math simpler).
|
|
|
|
sidhujag
Legendary
Offline
Activity: 2044
Merit: 1005
|
|
July 04, 2015, 07:56:52 PM |
|
I have a feeling that fiber (eu) and equities are going to rally hard on a grexit... Makes sense cause we still haven't hit our long term target and makes sense from a market perspective too.
|
|
|
|
cypherdoc (OP)
Legendary
Offline
Activity: 1764
Merit: 1002
|
|
July 04, 2015, 08:02:47 PM Last edit: July 04, 2015, 09:00:40 PM by cypherdoc |
|
the increased frequency of SPV mining has occurred precisely b/c of the more consistently filled 1MB blocks and as a deviant defensive strategy being employed to navigate that congestion. otherwise, you'd have to be arguing that what once wasn't a problem with blocks <1MB now has to be occurring precisely b/c 1MB is a magic number at which blocks are deemed "too big". what are the chances of that? https://www.reddit.com/r/Bitcoin/comments/3c3emd/please_boycott_f2pool/css1oy1
|
|
|
|
sickpig
Legendary
Offline
Activity: 1260
Merit: 1008
|
|
July 04, 2015, 09:31:47 PM |
|
The fork is now resolved. Six blocks were orphaned resulting in a loss of ~100 BTC to F2Pool. Rather good incentive for F2Pool to update their software for proper BIP66 support so this doesn't happen again!
since BIP66 has been activated only after 95% of the last 2k blocks were v3 blocks, it means f2pool had already produced BIP66 blocks before the incident; in fact, last time I checked f2pool had ~20% of the total hashing power. so by definition f2pool was BIP66 compliant before the fork. the problem is that a lot of miners are "SPV mining", among them: f2pool and antpool
This is not quite true. It's important we understand exactly what happened to prevent people from spinning this incident to strengthen their blocksize limit position. Here's what happened:
- BTC Nuggets mined block #363,731. It was not BIP66 compliant.
- F2Pool began mining on this block before they had validated its contents (SPV mining).
- F2Pool should have orphaned block #363,731 as soon as they realized it was invalid.
- However, F2Pool kept mining on this invalid block.
- F2Pool hit a lucky streak and mined two more blocks (#363,732 and #363,733), pulling this invalid chain further ahead.
- AntPool seemed to act similarly, and mined block #363,734 on top of the invalid chain.
- F2Pool quickly mined two more blocks on this chain (6 confs).
- Finally, the rest of the network "pulled ahead" (the valid chain became longer) and F2Pool and AntPool finally orphaned their invalid chain.
The incident was not caused by SPV mining for the few moments it takes to validate the latest block. It was caused by F2Pool (and AntPool) never getting around to actually validating the blocks.
sure, you're right. they kept SPV mining for six blocks in a row, and if you ask me they would have continued if the network hadn't pulled ahead. this means that their patched btc core was/is able to produce valid blocks, but didn't validate any previous block whatsoever. this SPV mining habit has to be fixed, otherwise we risk having this kind of incident every time a softfork is activated by mining vote. another thing to focus on is the number of full nodes that discarded those 6 invalid blocks, i.e. the percentage of btc core with version >= 0.10.0
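Just to make the dynamic concrete, here's a toy coin-flip simulation of that race (my assumptions, not measurements: a ~40% combined share for the non-validating pools and a one-block head start for the invalid tip). The non-validating side keeps extending the invalid chain until the validating majority pulls ahead, at which point the invalid blocks get orphaned:
Code:
import random

# Toy model of the SPV-mining race described above (illustrative only).
random.seed(4)
spv_share = 0.40                # assumed combined hashrate of non-validating pools
invalid_len, valid_len = 1, 0   # invalid block starts with a one-block lead

# Non-validating pools keep building on the invalid tip while it is not behind.
while invalid_len >= valid_len:
    if random.random() < spv_share:
        invalid_len += 1        # non-validating pool extends the invalid chain
    else:
        valid_len += 1          # validating miners extend the valid chain

print(f"invalid chain grew to {invalid_len} blocks before being orphaned")
print(f"valid chain overtook it after {valid_len} blocks")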
|
Bitcoin is a participatory system which ought to respect the right of self determinism of all of its users - Gregory Maxwell.
|
|
|
Peter R
Legendary
Offline
Activity: 1162
Merit: 1007
|
|
July 04, 2015, 10:04:17 PM Last edit: July 07, 2015, 02:11:46 PM by Peter R |
|
the increased frequency of SPV mining has occurred precisely b/c of the more consistently filled 1MB blocks and deviant defensive strategies being employed to navigate that congestion...otherwise, you'd have to be arguing that what once wasn't a problem with blocks <1MB now has to be occurring precisely b/c 1MB is a magic number at which blocks are deemed "too big". what are the chances of that?
Cypher, you're brilliant!
Evidence of an effective blocksize limit: no protocol-enforced limit required
In this post we show that, given a few simplifying assumptions*, the network will automatically enforce an effective blocksize limit. This effective limit scales in an automatic fashion with improvements in technology, without requiring an explicit limit being enforced at the protocol level.
Background: We learned from last night's fork that miners are incentivized to mine "empty" blocks while they work to process the previously solved block. This increases their revenue, as shown here. However, this also means that the maximum possible value of the average blocksize is reduced in proportion to the frequency of these "empty blocks." For example, if 10% of the blocks were guaranteed to be empty, the maximum value of the average blocksize would presently be 900 kB, rather than 1 MB. We show that as the average size of the blocks increases, the percentage of empty blocks increases in direct proportion, thereby providing a counterbalancing force that serves to limit the blockchain's growth rate. We will refer to the "maximum value of the average blocksize" as the effective blocksize limit.
Let τ be the time it takes to process a typical block and let T be the average block time (10 min). [CLARIFICATION: τ includes all delays between the moment the miner has enough information to begin mining (an empty block) on the block header, to the moment he's processed the previous block, created a new non-empty block template, and has his hashing power working on that new non-empty block]. The fraction of time the miner is hashing on an empty block is clearly τ / T; the fraction of the time the miner is hashing on a non-empty block is 1 - τ / T = (T - τ) / T.
We will assume that every miner applies the same policy of producing empty SPV blocks for time 0 < t < τ, and blocks of size S' for t > τ. Under these conditions, the expectation value of the blocksize is equal to the expectation value of the blocksize on the interval 0 < t < τ, plus the expectation value of the blocksize during the interval τ < t < T:
S_effective = (~0) (τ / T) + S' [(T - τ) / T] = S' [(T - τ) / T]    (Eq. 1)
The time, τ, it takes to process a block is not constant, but rather is assumed to depend linearly** on the size of the block. Approximating the size of the previous block as S', we get:
τ = k S'
where k is the number of minutes it takes to process, on average, 1 MB of transactional data. Substituting this into Eq. (1) yields the following equation:
S_effective = S' (T - k S') / T = S' - (k/T) S'^2
This is the equation for a concave-down parabola. To find the maximum of this curve, we take its partial derivative with respect to S' and set it to zero:
d S_effective / d S' = 1 - 2 k S' / T = 0
Solving the above equation for S' gives
S' = T / (2 k)
This (S') is the blocksize that maximizes the transactional capacity of the network. Substituting this result back into our equation for the effective blocksize limit gives:
S_effective = T / (4 k)
Some numbers: Assume it takes on average 15 seconds*** to process a typical 1 MB block (k = 0.25 min / MB). Since T = 10 min, this means the maximum average blocksize (network capacity) is limited to:
S_effective = T / (4 k) = (10 min) / (4 x 0.25 min / MB) = 10 MB.
QED. We've shown that there exists a limit on the maximum value of the average blocksize, due to the time it takes to process and verify a block, irrespective of any protocol-enforced limits.
*There are a few other assumptions that I don't detail above, but it's Saturday and I'm enjoying the sunshine.
**Here we're also assuming that the amount of ECDSA verify operations in a block is roughly proportional to the block's size.
***This is a guess. We should estimate it by looking at the ratio of empty blocks to non-empty blocks produced by F2Pool.
**********UPDATE**********
…the last 27,027 blocks (basically since Jan 1st 2015), f2pool-attributed blocks: 5241, of which coinbase-only: 139. For antpool, this is 4506 / 246. See also: Empty blocks [bitcointalk.org]
Awesome! Thanks!!
We can estimate the average effective time it takes to process the blocks, then, as
τ ~= T (N_empty / N_notempty) ~= T (N_empty / (N_total - N_empty))
F2Pool:
~= (10 min) x [139 / (5241 - 139)] = 16.3 seconds
AntPool:
~= (10 min) x [246 / (4506 - 246)] = 34.6 seconds
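For anyone who wants to play with the parabola, here's a minimal Python sketch using T = 10 min and the 15-second guess for k (swap in your own k; the 16.3 s / 34.6 s estimates above would give smaller limits):
Code:
# Effective average blocksize as a function of the non-empty blocksize S',
# S_eff(S') = S' * (T - k*S') / T, per the derivation above.
T = 10.0    # average block time, minutes
k = 0.25    # minutes to process 1 MB -- the 15-second guess

def s_effective(s_prime):
    """Average (effective) blocksize when non-empty blocks have size s_prime MB."""
    return s_prime * (T - k * s_prime) / T

s_star = T / (2 * k)   # non-empty blocksize that maximizes throughput
s_max = T / (4 * k)    # effective blocksize limit
print(f"optimal non-empty blocksize: {s_star:.1f} MB")   # 20.0 MB
print(f"effective blocksize limit:   {s_max:.1f} MB")    # 10.0 MB

# coarse sweep as a sanity check on the calculus
best = max(range(0, 81), key=lambda s: s_effective(s / 2.0))
print(f"sweep maximum near:          {best / 2.0:.1f} MB")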
|
|
|
|
vane91
Member
Offline
Activity: 133
Merit: 26
|
|
July 04, 2015, 10:25:52 PM |
|
yes, unless someone starts developing an ASIC for block verification... which would actually be good news!
|
|
|
|
Peter R
Legendary
Offline
Activity: 1162
Merit: 1007
|
|
July 04, 2015, 10:43:30 PM Last edit: July 04, 2015, 11:11:20 PM by Peter R |
|
|
|
|
|
|
solex
Legendary
Offline
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
|
|
July 05, 2015, 12:02:59 AM |
|
Some numbers:
Assume it takes on average 30 seconds to verify 1 MB of typical transactional data (k =0.5 min / MB). Since T = 10 min, this means the maximum average blocksize (network capacity) is limited to:
Seffective = T / (4 k) = (10 min) / (4 x 0.5 min / MB) = 5 MB.
QED. We've shown that there exists a limit on the maximum value of the average blocksize, due to the time it takes to verify a block, irrespective of any protocol enforced limits.
Great work Peter, but do we have any empirical evidence for the 30 seconds? Seems surprisingly high and I would have guessed just a few seconds.
|
|
|
|
Peter R
Legendary
Offline
Activity: 1162
Merit: 1007
|
|
July 05, 2015, 12:07:18 AM |
|
Some numbers:
Assume it takes on average 30 seconds to verify 1 MB of typical transactional data (k =0.5 min / MB). Since T = 10 min, this means the maximum average blocksize (network capacity) is limited to:
Seffective = T / (4 k) = (10 min) / (4 x 0.5 min / MB) = 5 MB.
QED. We've shown that there exists a limit on the maximum value of the average blocksize, due to the time it takes to verify a block, irrespective of any protocol enforced limits.
Great work Peter, but do we have any empirical evidence for the 30 seconds? Seems surprisingly high and I would have guessed just a few seconds.
No, I just made it up. I think I'll change it to 15 seconds, as I agree it's probably too high. I guess we could estimate it by looking at the ratio of empty blocks to non-empty blocks produced by F2Pool. If someone wants to tabulate that data, I'll update my post.
|
|
|
|
TheRealSteve
|
|
July 05, 2015, 12:23:37 AM |
|
I guess we could estimate it by looking at the ratio of empty blocks to non-empty blocks produced by F2Pool.
If someone wants to tabulate that data, I'll update my post.
If by empty you mean coinbase-only, then in the last 27,027 blocks (basically since Jan 1st 2015), f2pool-attributed blocks: 5241, of which coinbase-only: 139. For antpool, this is 4506 / 246. Not sure if that's all the info you'd need, though. See also: Empty blocks [bitcointalk.org]
|
|
|
|
thezerg
Legendary
Offline
Activity: 1246
Merit: 1010
|
|
July 05, 2015, 12:31:33 AM |
|
Your modification to require the inputs to state which block they come from is a clever way to reduce the "addr does not exist" proof. But I don't understand your subsequent complexity. If the txn input states that block B is the UTXO then the invalid proof is simply to supply B, right?
|
|
|
|
justusranvier
Legendary
Offline
Activity: 1400
Merit: 1013
|
|
July 05, 2015, 12:41:24 AM |
|
If the txn input states that block B is the UTXO then the invalid proof is simply to supply B, right?
That's one way to do it, however even this can be shortened. Right now, with all the blocks < 1 MB, it's not really a big deal to supply the entire block to prove that the referenced transaction doesn't exist, but it'd be nice to not require the entire block, especially for when blocks are larger. By adding a rule to new blocks that requires all the transactions to be ordered by their hash, you don't need to supply the entire block to prove that the transaction doesn't exist. It would be good to have that ordering requirement in place before blocks are allowed to grow, to make sure that fraud proof size is bounded.
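A rough sketch of why the ordering helps (toy code, and my own assumption about how the proof would be packaged -- this is not Bitcoin's current block format): with txids committed in sorted order, a non-membership proof is just the two adjacent leaves that bracket the missing txid, plus their branches to the block's commitment, instead of the whole block.
Code:
import hashlib
from bisect import bisect_left

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def non_membership_witness(sorted_txids, target):
    """Return the adjacent committed txids that bracket `target`.

    Shipped together with their (omitted here) branches to the block's
    commitment, they prove `target` cannot appear in the sorted list.
    """
    i = bisect_left(sorted_txids, target)
    if i < len(sorted_txids) and sorted_txids[i] == target:
        raise ValueError("target is in the block")
    left = sorted_txids[i - 1] if i > 0 else None
    right = sorted_txids[i] if i < len(sorted_txids) else None
    return left, right

txids = sorted(h(bytes([n])) for n in range(8))   # toy "block" of 8 txids
missing = h(b"txid not in this block")
lo, hi = non_membership_witness(txids, missing)
print("bracketing leaves:",
      lo.hex()[:16] if lo else None,
      hi.hex()[:16] if hi else None)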
|
|
|
|
Peter R
Legendary
Offline
Activity: 1162
Merit: 1007
|
|
July 05, 2015, 12:41:29 AM Last edit: July 05, 2015, 01:16:29 AM by Peter R |
|
I guess we could estimate it by looking at the ratio of empty blocks to non-empty blocks produced by F2Pool.
If someone wants to tabulate that data, I'll update my post.
If by empty you mean coinbase-only, then in the last 27,027 blocks (basically since Jan 1st 2015), f2pool-attributed blocks: 5241, of which coinbase-only: 139. For antpool, this is 4506 / 246. Not sure if that's all the info you'd need, though. See also: Empty blocks [bitcointalk.org]
Awesome! Thanks!!
We can estimate the average effective time it takes to process the blocks, then, as
τ ~= T (N_empty / N_notempty) ~= T (N_empty / (N_total - N_empty))
F2Pool: τ ~= (10 min) x [139 / (5241 - 139)] = 16.3 seconds
AntPool: τ ~= (10 min) x [246 / (4506 - 246)] = 34.6 seconds
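Same arithmetic as a tiny Python snippet, using TheRealSteve's counts and T = 10 min (with "empty" meaning coinbase-only):
Code:
T = 600.0  # average block interval, seconds

def estimate_tau(n_empty, n_total):
    """tau ~= T * N_empty / (N_total - N_empty), per the formula above."""
    return T * n_empty / (n_total - n_empty)

print(f"F2Pool:  {estimate_tau(139, 5241):.1f} s")   # ~16.3 s
print(f"AntPool: {estimate_tau(246, 4506):.1f} s")   # ~34.6 s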
|
|
|
|
cypherdoc (OP)
Legendary
Offline
Activity: 1764
Merit: 1002
|
|
July 05, 2015, 01:03:38 AM |
|
I guess we could estimate it by looking at the ratio of empty blocks to non-empty blocks produced by F2Pool.
If someone wants to tabulate that data, I'll update my post.
If by empty you mean coinbase-only, then in the last 27,027 blocks (basically since Jan 1st 2015), f2pool-attributed blocks: 5241, of which coinbase-only: 139. For antpool, this is 4506 / 246. Not sure if that's all the info you'd need, though. See also: Empty blocks [bitcointalk.org]
is there a way for you to tell what % of blocks have been full over the last 3 wks and compare that to the prior period going back to, say, Jan 1? i'd include those in the 900+ and 720-750 kB range as being full.
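Something along these lines would do the tabulation (a hypothetical sketch -- the "full" cut-offs are the ones suggested above, and the block data has to come from your own explorer dump or node):
Code:
from datetime import datetime, timedelta

def is_full(size_bytes):
    # treat ~900+ kB blocks and the 720-750 kB soft-limit range as "full"
    return size_bytes >= 900_000 or 720_000 <= size_bytes <= 750_000

def full_share(blocks, since):
    """Fraction of blocks mined at/after `since` that count as full."""
    sample = [size for ts, size in blocks if ts >= since]
    return sum(is_full(size) for size in sample) / len(sample) if sample else 0.0

blocks = []  # fill with (datetime, size_bytes) pairs from your own data source
now = datetime(2015, 7, 5)
print("full-block share, last 3 weeks:", full_share(blocks, now - timedelta(weeks=3)))
print("full-block share, since Jan 1: ", full_share(blocks, datetime(2015, 1, 1)))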
|
|
|
|
|