Author Topic: Gold collapsing. Bitcoin UP.  (Read 2032233 times)
Peter R
July 07, 2015, 02:00:49 AM
Last edit: July 07, 2015, 02:34:29 AM by Peter R
#28281

1.  Is F2Pool/AntPool more likely to produce an empty block when the previous block is large?

2.  Is F2Pool/AntPool more likely to produce an empty block when mempool swells?

I think the answer to Q1 will be "yes." But I don't see why the answer to Q2 would be yes for any reason other than the previous block is more likely to be large when mempool swells (i.e., mempool is not the cause, just correlated).

OK, based on the data from JohhnyBravo, it looks like the answer to Q1 is "YES" for F2Pool but "NO" for AntPool.  Here's a rough summary based on the last 100 days of blocks:



If anyone's interested in how I did this, I'll give you a brief walk-through.  I first put the data into three bins:
(1) blocks produced by the Miner after a small block (0 kB - 333 kB),
(2) blocks produced by the Miner after a medium block (334 kB - 666 kB), and
(3) blocks produced by the Miner after a large block (667 kB - 1000 kB).
I also removed any data points where the Miner found a block while mining upon his own previous block.

For a "null hypothesis," I assumed that getting an empty block was the outcome of a repeated Bernoulli trial. I used a Bernoulli trial with P_empty = 3.5% for F2Pool and P_empty = 6.3% for AntPool.

I then asked: if the null hypothesis is true, then what are the chances of getting, e.g., in the case of F2Pool, 34 (or more) empty blocks out of 619 "large-block" trials?  We'd only expect 619 x 0.035 = 22 empty blocks!

The number of successes in repeated Bernoulli trials follows a Binomial distribution, so I summed the corresponding Binomial probabilities from 34 to 619 to determine the chances of getting this many (or more) empty blocks by dumb luck.  As we can see, there's only a 0.4% chance of this happening.  This suggests we reject the null hypothesis in favour of the idea that "Yes, F2Pool is actually more likely to produce an empty block when the previous block was large."
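For anyone who wants to sanity-check that number, here's a rough sketch of the tail calculation in Python (not necessarily how I actually computed it):

Code:
# Chance of seeing 34 or more empty blocks in 619 "previous block was large"
# trials, if empty blocks really occur independently at the 3.5% baseline rate.
from scipy.stats import binom

n_trials = 619      # F2Pool blocks whose previous block was 667 kB - 1000 kB
p_empty  = 0.035    # baseline empty-block rate under the null hypothesis
observed = 34       # empty blocks actually observed in those trials

# P(X >= 34) = survival function evaluated at 33
p_value = binom.sf(observed - 1, n_trials, p_empty)
print(p_value)      # a small tail probability, on the order of the ~0.4% quoted above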

This also means that the effect I modelled on the weekend is real, at least for miners behaving like F2Pool.  I'm curious though, why AntPool's data is so different.

cypherdoc (OP)
July 07, 2015, 02:02:12 AM
#28282

just look at this.  pitiful.  just shameful that core dev allows attacking spammers, emboldened by Stress Tests 1 & 2, to disrupt new and existing users, who can be found all over Reddit complaining about stuck tx's.  this is exactly the dynamic that Mike Hearn was talking about.  look at that level of unconf tx's, 51,000, never seen before, and the highly disruptive 2.90 TPS:

https://www.reddit.com/r/Bitcoin/comments/3cbpwe/new_transaction_record_just_reached_147_txs/csu5leg


smooth
July 07, 2015, 02:09:04 AM
#28283

furthermore, you ignore the obvious fact that hashers are independently minded and will leave any pool that abuses its power via all the shenanigans you dream up to scare everyone about how bad Bitcoin is.

From Meni Rosenfeld's paper, the probability that a pool (or any solo miner) will receive any payout for a day of mining is:

1 - e^(-(fraction of network hashrate) × 144), where there are 144 blocks per day

Thus a pool which has only 1% of the network hashrate has only a 76% chance of winning any blocks for the day. And that probability is reset the next day. Thus a pool with only 1% of the network hashrate could go days without winning a block.

This makes it very difficult to design a payout scheme (cf. the schemes Meni details) for ephemeral SPV pool miners (which can come and go as often as they like) that is equitable and yet doesn't also place the pool at risk of bankruptcy, while also allowing for the fact that running a pool is an extremely high-competition, substitutable-good, low-profit-margin business model (unless you economy-of-scale up, and especially if you use monopolistic tactics).

In short, it is nearly impossible, economically, to run a pool that has only 1% of the network hashrate.

Thus you can pretty well be damn sure that the pools are Sybil attacked and are lying about their controlling stakes, such that multiple 1% pools must be sharing the same pot of income from blocks in order to make the economics and math work.

QED.

Edit: note that with 2.5 minute blocks (i.e. Litecoin), it improves considerably:

1 - e^(-(fraction of network hashrate) × 576), where there are 576 blocks per day

Thus a pool which has only 1% of the network hashrate has a 99.7% chance of winning any blocks for the day.

However, one must then factor in that latency (and thus orphan rate) becomes worse, and that higher hashrate profits more than lower hashrate given any significant amount of latency in the network, as gmaxwell pointed out upthread. So it is not clear that Litecoin gained any advantage in terms of decentralization with the faster block period.

Conclusions are valid in substance, but in terms of not getting blocks for days, that's not really a big problem for a pool (or solo mining farm) being run as a business. Most expenses (hosting, etc.) are going to run monthly, possibly somewhat more often (paying workers), but not daily. Once you look at a one-month window it is about the same as Litecoin: almost certain to find a block in a month. With some ability to draw on reserves or credit in the event of a few bad months, it should still be viable.
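As a quick sanity check of the 76% and 99.7% figures above (a trivial sketch, assuming block finds are a Poisson process and the pool's hashrate share is constant):

Code:
# Chance of finding at least one block in a day, for a given share of network hashrate.
import math

def p_at_least_one_block(hashrate_share, blocks_per_day):
    return 1 - math.exp(-hashrate_share * blocks_per_day)

print(p_at_least_one_block(0.01, 144))   # Bitcoin, 10-minute blocks:   ~0.76
print(p_at_least_one_block(0.01, 576))   # Litecoin, 2.5-minute blocks: ~0.997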


cypherdoc (OP)
July 07, 2015, 02:18:13 AM
#28284

1.  Is F2Pool/AntPool more likely to produce an empty block when the previous block is large?

2.  Is F2Pool/AntPool more likely to produce an empty block when mempool swells?

I think the answer to Q1 will be "yes." But I don't see why the answer to Q2 would be yes for any reason other than the previous block is more likely to be large when mempool swells (i.e., mempool is not the cause, just correlated).

OK, based on the data from JohhnyBravo, it looks like the answer to Q1 is "YES" for F2Pool but "NO" for AntPool.  Here's a rough summary based on the last 100 days of blocks:



If anyone's interested in how I did this, I'll give you a brief walk-through.  I first put the data into three bins:
(1) blocks produced by the Miner after a small block (0 kB - 333 kB),
(2) blocks produced by the Miner after a medium block (334 kB - 666 kB), and
(3) blocks produced by the Miner after a large block (667 kB - 1000 kB).
I also removed any data points where the Miner found a block while mining upon his own previous block.

For a "null hypothesis," I assumed that getting an empty block was the outcome of a repeated Bernoulli trial. I used a Bernoulli trial with P_empty = 3.5% for F2Pool and P_empty = 6.3% for AntPool.

I then asked: if the null hypothesis is true, then what are the chances of getting, e.g., in the case of F2Pool, 34 (or more) empty blocks out of 619 "large-block" trials?  We'd only expect 619 x 0.035 = 22 empty blocks!

The number of successes in repeated Bernoulli trials follows a Binomial distribution, so I summed the corresponding Binomial probabilities from 34 to 619 to determine the chances of getting this many (or more) empty blocks by dumb luck.  As we can see, there's only a 0.4% chance of this happening.  This suggests we reject the null hypothesis in favour of the idea that "Yes, F2Pool is actually more likely to produce an empty block when the previous block was large."

This also means that the effect I modelled on the weekend is real, at least for miners behaving like F2Pool.  I'm curious though, why AntPool's data is so different.

why does the 0 tx block have to come "immediately" after a large block?
Peter R
July 07, 2015, 02:22:04 AM
#28285

why does the 0 tx block have to come "immediately" after a large block?

They don't.  Empty blocks can come after any sized block.  But I just showed that F2Pool is more likely to produce an empty block when the previous block was large, than when the previous block was not large.  

This makes sense to me because I expect that for large blocks, there's more time between when F2Pool has just enough information to begin hashing, and when they have processed the block and sent a new non-empty blocktemplate to their hashers to work on.  If this time is longer, then there's a better chance they get lucky and mine an empty block.  See what I mean?
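Here's a back-of-the-envelope way to see the size of the effect (a rough sketch, not my full model from the weekend): if the pool hashes on an empty template for tau seconds after each block before switching to a full template, and block arrivals are memoryless with a 600-second mean, then the chance that any given block it finds is empty is roughly the chance it solves a block inside that window.

Code:
# Probability a pool's block is empty if it mines on an empty template for
# tau seconds after each block (memoryless 600 s block intervals assumed).
import math

def p_empty_block(tau_seconds, mean_block_interval=600.0):
    return 1 - math.exp(-tau_seconds / mean_block_interval)

for tau in (5, 16, 35):
    print(tau, round(p_empty_block(tau), 3))   # ~0.8%, ~2.6%, ~5.7%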

cypherdoc (OP)
July 07, 2015, 02:23:35 AM
#28286

why does the 0 tx block have to come "immediately" after a large block?

They don't.  Empty blocks can come after any sized block.  But I just showed that F2Pool is more likely to produce an empty block / "defensive block" when the previous block was large than they are when the previous block was small or medium. 



it might be interesting to see if AntPool becomes statistically significant after 2 blocks instead of 1.
gmaxwell
July 07, 2015, 02:41:27 AM
#28287

Clean and synched mempools make for a cleaner blockchain, else garbage in, garbage out. Most mempools are synched because node owners don't usually mess with tx policy. They accept the defaults.
The blockchain itself contains substantial counter-evidence. Any block over 750k was produced with changed settings, as were a substantial chunk of the transactions.  I think this is all well and good, but it's not the case that it's all consistent.

Quote
IBLT doesn't currently exist, and other mechanisms like the relay network protocol don't care about mempool synchronization levels.

IBLT does exist as it has been prototyped by Kalle and Rusty. It is just nowhere near ready for a pull request.
It has never relayed a _single_ block, not in a lab, not anywhere. It does _not_ exist. It certainly can and will exist-- though it's not yet clear how useful it will be over the relay network-- Gavin, for example, doesn't believe it will be useful "until blocks are hundreds of megabytes".

But don't think that I'm saying anything bad about it-- I'm not. Cypherdoc was arguing that mempools were (and had to be) the same, and cited IBLT as a reason---- but it cannot currently be a reason, because it doesn't exist.  Be careful about assigning virtue to the common-fate aspect of it-- as it can make censorship much worse. (OTOH, Rusty's latest optimizations reduce the need for consistency; and my network block coding idea-- which is what inspired IBLT, but is more complex-- basically eliminates consistency pressure entirely.)

Quote
I recall that you had a tepid response summarizing the benefit of IBLT as a 2x improvement.  Of course this is hugely dismissive, because it ignores a very important factor in scaling systems: required information density per unit time. Blocks having to carry, in 1 second, all the data that earlier took 600 seconds is a bottleneck in the critical path.
It depends on what you're talking about: if you're talking about throughput it's at best a 2x improvement, if you're talking about latency it's more.  But keep in mind that the existing, widely deployed block relay network protocol reduces the data sent per already-known transaction to _two bytes_.

Quote
That min fee at 0.0005 is 14 cents, and most users consider this to be way too high, especially if BTC goes back to $1000 and this becomes 50 cents. I kicked off a poll about tx fees: 55% of users don't want to pay more than 1 cent, and 80% of users think 5 cents or less is enough of a fee.
https://bitcointalk.org/index.php?topic=827209.0
GAH! I'm not saying it's a good setting-- I'm just giving a concrete example that nodes (and miners) can control their mempool sizes, as this was at odds with cypherdoc's expectations-- instead he thought miners might be suffering because of large mempools-- and I pointed out that if their mempool was too big they could simply reduce it, and he said he didn't believe me. I don't know how I could have made it more clear, but I hope it's clear now. Smiley
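For concreteness, the knob in question is just a node setting, e.g. one line in bitcoin.conf (the 0.0005 figure is the example value being discussed, not a recommendation):

Code:
minrelaytxfee=0.0005   # BTC/kB; transactions paying less are not accepted into the mempool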

cypherdoc (OP)
July 07, 2015, 02:41:59 AM
#28288

why does the 0 tx block have to come "immediately" after a large block?

They don't.  Empty blocks can come after any sized block.  But I just showed that F2Pool is more likely to produce an empty block when the previous block was large, than when the previous block was not large.  

This makes sense to me because I expect that for large blocks, there's more time between when F2Pool has just enough information to begin hashing, and when they have processed the block and sent a new non-empty blocktemplate to their hashers to work on.  If this time is longer, then there's a better chance they get lucky and mine an empty block.  See what I mean?


i think so, which is also why these 0 tx blocks usually come within a minute of a large block?
Peter R
July 07, 2015, 02:45:25 AM
#28289

why does the 0 tx block have to come "immediately" after a large block?

They don't.  Empty blocks can come after any sized block.  But I just showed that F2Pool is more likely to produce an empty block when the previous block was large, than when the previous block was not large.  

This makes sense to me because I expect that for large blocks, there's more time between when F2Pool has just enough information to begin hashing, and when they have processed the block and sent a new non-empty blocktemplate to their hashers to work on.  If this time is longer, then there's a better chance they get lucky and mine an empty block.  See what I mean?


i think so, which is also why these 0 tx blocks usually come within a minute of a large block?

Yes, exactly.  And we've just shown that, in the case of F2Pool, the effect is real.  We're not imagining it. 

Run Bitcoin Unlimited (www.bitcoinunlimited.info)
cypherdoc (OP)
July 07, 2015, 02:56:24 AM
#28290

GAH! I'm not saying it's a good setting-- I'm just giving a concrete example that nodes (and miners) can control their mempool sizes, as this was at odds with cypherdoc's expectations-- instead he thought miners might be suffering because of large mempools-- and I pointed out that if their mempool was too big they could simply reduce it, and he said he didn't believe me. I don't know how I could have made it more clear, but I hope it's clear now. Smiley



yes, thanks for reminding me of that minrelaytxfee as an adjustable variable to screen the mempool.  i had read about that the other day on Reddit but forgot.  i'm not a tech guy after all. Wink

but you haven't verified that F2Pool or AntPool have increased their minrelaytxfee to minimize their mempool, have you?  remember, this whole mempool discussion was based off you responding to Peter's mathematics post the other day, where you argued that block verification times were only 80 ms for a 250 kB block b/c tx's had been pre-verified after being passed around to all nodes across the network and didn't require re-verification by miners on the relay network, and that this was therefore a refutation of his hypothesis of increasing block verification times (16-37 sec on avg) leading to SPV mining.
Peter R
July 07, 2015, 02:59:41 AM
#28291

GAH! I'm not saying it's a good setting-- I'm just giving a concrete example that nodes (and miners) can control their mempool sizes, as this was at odds with cypherdoc's expectations-- instead he thought miners might be suffering because of large mempools-- and I pointed out that if their mempool was too big they could simply reduce it, and he said he didn't believe me. I don't know how I could have made it more clear, but I hope it's clear now. Smiley



yes, thanks for reminding me of that minrelaytxfee as an adjustable variable to screen the mempool.  i had read about that the other day on Reddit but forgot.  i'm not a tech guy after all. Wink

but you haven't verified that F2Pool or AntPool have increased their minrelaytxfee to minimize their mempool, have you?  remember, this whole mempool discussion was based off you responding to Peter's mathematics post the other day, where you argued that block verification times were only 80 ms for a 250 kB block b/c tx's had been pre-verified after being passed around to all nodes across the network and didn't require re-verification by miners on the relay network, and that this was therefore a refutation of his hypothesis of increasing block verification times (16-37 sec on avg) leading to SPV mining.

Let me chime in here quickly, because I think Greg and I are talking about slightly different things.  My model was considering the time between the first moment that a pool could begin hashing on a blockheader, and when the previous block had been processed, a new non-empty block template constructed, and the hashers re-assigned to work on this non-empty block.

It looks like this time, empirically, is 16 sec (F2Pool) and 35 sec (AntPool), based on these estimates.  
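To give a sense of where numbers like these could come from (purely illustrative figures, not necessarily the exact calculation I did), you can invert the earlier back-of-the-envelope relationship: if a fraction p of a pool's blocks are empty, the implied header-only window is roughly tau = -600 * ln(1 - p).

Code:
# Illustrative only: back out an implied delay from an empty-block fraction,
# assuming memoryless 600 s block intervals.
import math

def implied_delay_seconds(p_empty, mean_block_interval=600.0):
    return -mean_block_interval * math.log(1 - p_empty)

print(round(implied_delay_seconds(0.026), 1))   # ~15.8 s (an F2Pool-like empty-block rate)
print(round(implied_delay_seconds(0.057), 1))   # ~35.2 s (an AntPool-like empty-block rate)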

gmaxwell
July 07, 2015, 03:09:22 AM
Last edit: July 07, 2015, 03:19:58 AM by gmaxwell
#28292

as you know, even Gavin talks about this memory problem from UTXO.  and yes, i read the Reddit thread that resulted, in which you participated, and i'm aware that UTXO can be dynamically cached according to needs.
http://gavinandresen.ninja/utxo-uhoh

Gavin was insufficiently precise. There is a Reddit thread full of people calling Gavin a fool ( Sad ) for saying "memory" when he should have been saying "fast storage".  https://twitter.com/petertoddbtc/status/596710423094788097

Why do you think it's prudent to argue this with me?

Okay, let's take a bet. Since you're so confident, surely you'll grant me 1000:1 odds?-- I'll give my side away to a public cause.

The question is: "Is the entire UTXO set kept in RAM in any Bitcoin Core ever released?"

I will bet 3 BTC and, with the 1000:1 odds, if you lose you'll pay 3000 BTC (which I will give to the HashFast liquidators, to return it to the forum members that it was taken from; which will also save you some money in the ongoing lawsuit against you).

Sounds good?  How will we adjudicate?  If not, what is your counter-offer for the terms?

Quote
i didn't say this full block spam attack we're undergoing wasn't affecting my node at_all.  sure, i'm in swap b/c of the huge # of unconf tx's, but it hasn't shut down or stressed my nodes to any degree.  one of the arguments by Cripplecoiners was that these large block attacks would shut full nodes down from destabilization resulting in centralization.  i'm not seeing that.
The highest backlog of unconfirmed transactions I've ever seen is about 8MB. Even if we assume the real max was 3x that, this does not explain your hundreds of megabytes of swap.   We just had half the hashpower of the network mining without validating, creating multiple large forks and large reorganizations, but you don't see any destabilization. Okay.

Let me chime in here quickly, because I think Greg and I are talking about slightly different things.  My model was considering the time between the first moment that a pool could begin hashing on a blockheader, and when the previous block had been processed, a new non-empty block template constructed, and the hashers re-assigned to work on this non-empty block.

It looks like this time, empirically, is 15 sec (F2Pool) and 30 sec (AntPool), based on these estimates.

Here I suspect you're suffering from an excess of empiricism without adequately delving into the mechanism.   You can directly measure that time from input to minable on an actual node under your control, and will observe the time is hundreds of times faster than your estimate. Why?   Miners don't magically know when their pool has new work; they'll get work in the first milliseconds and then grind on it some time before returning work.  Even if the pool long-polls them, it takes time to replace work. So what I suspect you're actually measuring there is the latency of the mining process...  which is consistent with what we've experienced with P2Pool (5-20 second latencies from ASIC miners are common).
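If you want to see this on your own node, one crude way (a sketch with placeholder RPC credentials, not a recommendation of any particular tooling) is simply to time how long getblocktemplate takes to hand back a full template:

Code:
# Time a getblocktemplate call against a local node's JSON-RPC interface.
# RPC_URL / RPC_AUTH are placeholders for your own bitcoin.conf settings.
import json, time, requests

RPC_URL  = "http://127.0.0.1:8332"
RPC_AUTH = ("rpcuser", "rpcpassword")

payload = {"jsonrpc": "1.0", "id": "gbt-timing", "method": "getblocktemplate", "params": []}

start = time.time()
resp = requests.post(RPC_URL, auth=RPC_AUTH, data=json.dumps(payload)).json()
elapsed = time.time() - start

print("template with %d txs in %.3f s" % (len(resp["result"]["transactions"]), elapsed))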

I noted you posted the result of a classification; did you run the same data through a simple logistic regression with prior size as the treatment? The intercept in the model would be interesting.
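Something along these lines would do it (a minimal sketch; the file and column names are hypothetical, just to show the shape of the regression):

Code:
# Regress "this block was empty" on the size of the block it was built on top of.
# "pool_blocks.csv", "prev_block_kb" and "is_empty" are hypothetical names.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("pool_blocks.csv")        # one row per block mined by the pool
X = sm.add_constant(df["prev_block_kb"])   # treatment: previous block size, plus an intercept
y = df["is_empty"]                         # 1 if the block contained only the coinbase

model = sm.Logit(y, X).fit()
print(model.summary())                     # the intercept term is the interesting part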

But indeed, these conversations have been conflating several separate issues (latency vs throughput, etc.). Tricky to avoid that, since they're all relevant.

but you haven't verified that F2Pool or AntPool have increased their minrelaytxfee to minimize their mempool, have you?
I have; they'd previously cranked it down, and were producing small blocks and were flamed in public.  They've since turned it back up.

Quote
remember, this whole mempool discussion was based off you responding to Peter's mathematics post the other day, where you argued that block verification times were only 80 ms for a 250 kB block b/c tx's had been pre-verified after being passed around to all nodes across the network and didn't require re-verification by miners on the relay network, and that this was therefore a refutation of his hypothesis of increasing block verification times (16-37 sec on avg) leading to SPV mining.
As PeterR points out, they only need to wait for verification to actually verify (which they're not doing today), though they may have to wait longer to include transactions---- though I point out that's not fundamental: e.g. no matter how big the backlog is, you can produce a template sufficient to completely fill a block while doing no more work than handling a mempool of twice the maximum block size (by using a tiered mempool, though no one has bothered to implement this yet-- no one has even been complaining about how long createnewblock takes, due to the ability to produce empty blocks without skipping transactions).
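To illustrate what a "tiered mempool" could look like (a toy sketch only; nothing like this exists in Bitcoin Core today, and it ignores dependencies between transactions): keep the best ~2x MAX_BLOCK_SIZE worth of transactions, by fee rate, in a hot tier used for template building, and park everything else in an overflow tier that createnewblock never has to touch.

Code:
# Toy tiered mempool: hot tier capped at 2x the max block size; the cheapest
# transactions spill into an overflow tier that template building ignores.
import heapq

MAX_BLOCK_BYTES = 1000000
HOT_TIER_BYTES  = 2 * MAX_BLOCK_BYTES

class TieredMempool:
    def __init__(self):
        self.hot = []          # min-heap of (fee_rate, size, txid); cheapest on top
        self.hot_bytes = 0
        self.overflow = {}     # txid -> (fee_rate, size); never touched by templates

    def add(self, txid, fee_rate, size):
        heapq.heappush(self.hot, (fee_rate, size, txid))
        self.hot_bytes += size
        # Evict the cheapest transactions once the hot tier exceeds its cap.
        while self.hot_bytes > HOT_TIER_BYTES and self.hot:
            rate, sz, tid = heapq.heappop(self.hot)
            self.hot_bytes -= sz
            self.overflow[tid] = (rate, sz)

    def build_template(self):
        # Fill a block from the hot tier only, highest fee rate first.
        chosen, used = [], 0
        for rate, sz, tid in sorted(self.hot, reverse=True):
            if used + sz <= MAX_BLOCK_BYTES:
                chosen.append(tid)
                used += sz
        return chosen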
cypherdoc (OP)
July 07, 2015, 03:13:17 AM
#28293

look at what we're facing with this latest spam attack.  note the little blip back on May 29, which was Stress Test 1.  Stress Test 2 is the blip in the middle, with the huge spikes of the last couple of days on the far right.  this looks to me to be the work of a non-economic spammer looking to disrupt new and existing users via stuck tx's, which coincides with the Grexit and trouble in general in the traditional fiat markets.  they want to discourage adoption of Bitcoin.  the fastest way to eliminate this attack on users is to lift the block size limit to alleviate the congestion and increase the expense of the spam:

Peter R
July 07, 2015, 03:18:18 AM
#28294

...
So what I suspect you're actually measuring there is the latency of the mining process...  which is consistent with what we've experienced with P2Pool (5-20 second latencies from ASIC miners are common).

Interesting!  

And this is why I like the empirical "black box" approach.  I don't care initially what the mechanism is.  I try to find a simple model that explains the effect, and then, later, ask what that mechanism might be.

But now why would the "latency of the mining process" depend on the size of the previous block?  That doesn't make sense to me, but we just showed empirically that F2Pool (but not AntPool) is indeed more likely to produce an empty block when the previous block was large (suggesting that processing large blocks [including the mining process latency] takes longer for some reason).

gmaxwell
July 07, 2015, 03:23:56 AM
#28295

Interesting!  
And this is why I like the empirical "black box" approach.  I don't care initially what the mechanism is.  I try to find a simple model that explains the effect, and then, later, ask what that mechanism might be.
But now why would the "latency of the mining process" depend on the size of the previous block?  That doesn't make sense to me, but we just showed empirically that F2Pool is indeed more likely to produce an empty block when the previous block is large.
I wouldn't expect the miner latency part to be size-dependent: the miner can't even tell how big the prior block was.  I expect your function relating them to have a big constant term in it! (That's why I asked if you tried other regression approaches.)

I suppose there may be some dependence that is introduced by virtue of what percentage of the miners got the dummy work.  Would be pretty interesting to try to separate that.

Another trap of empirical analysis in this kind of discussion is that we can only measure how the system is now-- but then we use that to project the future;  e.g.  say we didn't have ECDSA caching today, you might then measure that it was taking >2 minutes to verify a maximum-size block... and yet 100 lines of code later that cost vanishes; which is bad news if you were counting on it to maintain incentives. Smiley
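To make the caching point concrete (a toy illustration, not Bitcoin Core's actual sigcache): all it takes is remembering which (signature, pubkey, message-hash) triples have already verified, so re-checking a block whose transactions were already validated on arrival in the mempool costs almost nothing.

Code:
# Toy signature cache: memoize successful verifications so revalidating
# already-seen signatures is nearly free. slow_ecdsa_verify is a stand-in
# for a real (expensive) ECDSA verification routine.
import time
from functools import lru_cache

def slow_ecdsa_verify(sig: bytes, pubkey: bytes, sighash: bytes) -> bool:
    time.sleep(0.0001)   # stand-in for the expensive elliptic-curve math
    return True

@lru_cache(maxsize=100000)
def cached_verify(sig: bytes, pubkey: bytes, sighash: bytes) -> bool:
    return slow_ecdsa_verify(sig, pubkey, sighash)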
cypherdoc (OP)
July 07, 2015, 03:24:19 AM
#28296

as you know, even Gavin talks about this memory problem from UTXO.  and yes, i read the Reddit thread that resulted, in which you participated, and i'm aware that UTXO can be dynamically cached according to needs.
http://gavinandresen.ninja/utxo-uhoh

Gavin was insufficiently precise. There is a Reddit thread full of people calling Gavin a fool ( Sad ) for saying "memory" when he should have been saying "fast storage".  https://twitter.com/petertoddbtc/status/596710423094788097

Why do you think it's prudent to argue this with me?

i'm not really arguing about this with you.  you said UTXO is not in memory.  i'm saying it depends on how fast a node wants to verify tx's via the dynamic caching setting they choose, which does get stored in memory.
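for reference, i believe the knob in question is the dbcache startup option (an assumption on my part that this is what we're both referring to), e.g. in bitcoin.conf:

Code:
dbcache=1000   # MB of chainstate/UTXO cache held in memory; example value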

Quote

Quote
i didn't say this full block spam attack we're undergoing wasn't affecting my node at_all.  sure, i'm in swap b/c of the huge # of unconf tx's, but it hasn't shut down or stressed my nodes to any degree.  one of the arguments by Cripplecoiners was that these large block attacks would shut full nodes down from destabilization resulting in centralization.  i'm not seeing that.
The highest backlog of unconfirmed transactions I've ever seen is about 8MB. Even if we assume the real max was 3x that, this does not explain your hundreds of megabytes of swap.   We just had half the hashpower of the network mining without validating, creating multiple large forks and large reorganizations, but you don't see any destabilization. Okay.

ok, i'm not getting the bolded part.  this graph shows 37 MB worth of unconf tx's, no?:



Quote
Let me chime in here quickly, because I think Greg and I are talking about slightly different things.  My model was considering the time between the first moment that a pool could begin hashing on a blockheader, and when the previous block had been processed, a new non-empty block template constructed, and the hashers re-assigned to work on this non-empty block.

It looks like this time, empirically, is 15 sec (F2Pool) and 30 sec (AntPool), based on these estimates.

Here I suspect you're suffering from an excess of empiricism without adequately delving into the mechanism.   You can directly measure that time from input to minable on an actual node under your control, and will observe the time is hundreds of times faster than your estimate. Why?   Miners don't magically know when their pool has new work; they'll get work in the first milliseconds and then grind on it some time before returning work.  Even if the pool long-polls them, it takes time to replace work. So what I suspect you're actually measuring there is the latency of the mining process...  which is consistent with what we've experienced with P2Pool (5-20 second latencies from ASIC miners are common).

I noted you posted the result of a classification; did you run the same data through a simple logistic regression with prior size as the treatment? The intercept in the model would be interesting.

But indeed, these conversations have been conflating several separate issues (latency vs throughput, etc.). Tricky to avoid that, since they're all relevant.

but you haven't verified that F2Pool or AntPool have increased their minrelaytxfee to minimize their mempool, have you?
I have; they'd previously cranked it down, and were producing small blocks and were flamed in public.  They've since turned it back up.

Quote
remember, this whole mempool discussion was based off you responding to Peter's mathematics post the other day, where you argued that block verification times were only 80 ms for a 250 kB block b/c tx's had been pre-verified after being passed around to all nodes across the network and didn't require re-verification by miners on the relay network, and that this was therefore a refutation of his hypothesis of increasing block verification times (16-37 sec on avg) leading to SPV mining.
As PeterR points out, they only need to wait for verification to actually verify (which they're not doing today), though they may have to wait longer to include transactions---- though I point out that's not fundamental: e.g. no matter how big the backlog is, you can produce a template sufficient to completely fill a block while doing no more work than handling a mempool of twice the maximum block size (by using a tiered mempool, though no one has bothered to implement this yet-- no one has even been complaining about how long createnewblock takes, due to the ability to produce empty blocks without skipping transactions).

Peter R
July 07, 2015, 03:25:28 AM
#28297

I noted you posted the result of a classification; did you run the same data through a simple logistic regression with prior size as the treatment? The intercept in the model would be interesting.

Not yet, but I had the same idea!  Ugh…but right now I have to get back to non-bitcoin work...

gmaxwell
July 07, 2015, 03:33:50 AM
#28298

ok, i'm not getting the bolded part.  this graph shows 37 MB worth of unconf tx's, no?:
No clue, no node I have access to is seeing that much-- they may have turned off the minfee rules (not unreasonable for a metrics thing)...

Even given that, again, 37MB doesn't explain your swap.
Adrian-x
July 07, 2015, 03:44:58 AM
#28299

why does the 0 tx block have to come "immediately" after a large block?

They don't.  Empty blocks can come after any sized block.  But I just showed that F2Pool is more likely to produce an empty block when the previous block was large, than when the previous block was not large.  

This makes sense to me because I expect that for large blocks, there's more time between when F2Pool has just enough information to begin hashing, and when they have processed the block and sent a new non-empty blocktemplate to their hashers to work on.  If this time is longer, then there's a better chance they get lucky and mine an empty block.  See what I mean?


So one could assume those empty blocks would never be mined, or would be orphaned, should a competitor be SPV mining.

I'd also assume this phenomenon is only viable while the block subsidy is high enough that transaction fees are inconsequential. In the near future this strategy would most likely be optimized to include the minimum set of tx's that are low-risk and high-reward, balanced against the risk of loss for the few seconds it would take to include them in a block.

cypherdoc (OP)
July 07, 2015, 03:45:24 AM
#28300

ok, i'm not getting the bolded part.  this graph shows 37 MB worth of unconf tx's, no?:
No clue, no node I have access to is seeing that much-- they may have turned off the minfee rules (not unreasonable for a metrics thing)...

Even given that, again, 37MB doesn't explain your swap.

yeah, i had noticed that.  strange...