Bitcoin Forum
July 26, 2017, 08:43:33 PM *
News: BIP91 seems stable: there's probably only slightly increased risk of confirmations disappearing. You should still prepare for Aug 1.
 
Poll
Question: Will you support Gavin's new block size limit hard fork of 8MB by January 1, 2016 then doubling every 2 years?
1.  yes
2.  no

Pages: « 1 ... 1415 [1416] 1417 ... 1558 »
Author Topic: Gold collapsing. Bitcoin UP.  (Read 1938887 times)
tvbcof
Legendary
*
Offline Offline

Activity: 2212


View Profile
July 07, 2015, 12:28:49 AM
 #28301

...
Just from knowing a little about database tuning and RAM vs. disk-backed memory, I have always wondered whether people have made projections about the performance of the validation process under different scenarios and whether it can/will become problematic.  One thing I've always wondered is whether it would be possible to structure transactions such that they would load validation processes too heavily on cue, particularly if it is the common case to push more and more data out of the dbcache.  Any thoughts on this that can be quickly conveyed?

Most of the thought has just been of the form "The utxo set size needs to be kept down", with an emphasis on the minimum resources to run a full node over the long term.  The database itself has n log n behavior, though if the working set is too large the performance falls off--and the fall-off is only enormous for non-SSD drives.  Maybe the working-set size is owed more attention, but my thinking there is that user tolerance for resource consumption kicks in long before that's a serious issue.

When you ask "would it be possible", do you mean an attack?  It's possible to construct a contrived block today that takes many minutes to verify, even within the 1MB limit; though a miner that did that would mostly be hurting themselves unless they had some arrangement with most of the hashpower to accept their block.

Thanks for the input.  Yes, as an attack.  Say, for instance, one primed the blockchain with a lot of customized high-overhead transactions over a period of time.  Then, when one wished to create a disruption, one could act on all of them at once, thereby upsetting those who were doing real validation.

The nature of the blockchain being what it is, I see an attack being most productive at creating a period of unusability of Bitcoin rather than a full scale failure (excepting a scenario where secret keys could be compromised through a flaw in the generation process which would, of course, be highly devastating.)

I was unaware that even today it would be possible to formulate transactions of the verification complexity that you mention.  It would be interesting to know if anyone is watching the blockchain for transactions which seem to be deliberately designed this way.


solex
Legendary
*
Offline Offline

Activity: 1078


100 satoshis -> ISO code


View Profile
July 07, 2015, 12:49:01 AM
 #28302

There is no requirement that mempools be in sync -- in fact, they're not, and the whole purpose of the blockchain is to synchronize nodes.  The mempools of nodes with identical fee and filtering policies that are similarly positioned on the network will be similar, but any change in their policies will make them quite different.

Clean and synched mempools make for a cleaner blockchain; otherwise garbage in, garbage out. Most mempools are synched because node owners don't usually mess with tx policy. They accept the defaults. Pools like Eligius with very different policies are the outliers. IBLT will help by incentivising node owners to converge on the same policies.

IBLT doesn't currently exist, and other mechanisms like the relay network protocol don't care about mempool synchronization levels.

IBLT does exist: it has been prototyped by Kalle and Rusty. It is just nowhere near ready for a pull request. Since this block-propagation efficiency was identified, a lot of work could have been done in Core Dev to advance it further (though I fully accept that other major advances, like headers-first, were in train and drew on finite resources). I recall that you had a tepid response summarizing the benefit of IBLT as a 2x improvement.  This is hugely dismissive because it ignores a very important factor in scaling systems: required information density per unit time. Blocks having to carry in 1 second all the data that earlier took 600 seconds is a bottleneck in the critical path.
It is LN which doesn't exist yet, and it will arrive far too late to help with scaling when blocks are (nearer to) averaging 1MB.
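For readers following along, the core idea of an IBLT (invertible Bloom lookup table) is that two nodes can exchange a sketch whose size scales with the *difference* between their mempools rather than with mempool size itself. A toy sketch in Python -- illustrative only, with simplified integer keys and made-up field names, not the Kalle/Rusty prototype:

```python
import hashlib

K = 3  # cells touched per key

def _h(data):
    return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

def _cells(key, m):
    return [_h(f"{i}:{key}".encode()) % m for i in range(K)]

class IBLT:
    """Toy invertible Bloom lookup table over positive integer keys."""
    def __init__(self, m=128):
        self.m = m
        self.count = [0] * m    # net insert/delete count per cell
        self.key_sum = [0] * m  # XOR of keys in each cell
        self.chk_sum = [0] * m  # XOR of key checksums, to detect pure cells

    def _update(self, key, sign):
        c = _h(f"chk:{key}".encode())
        for i in _cells(key, self.m):
            self.count[i] += sign
            self.key_sum[i] ^= key
            self.chk_sum[i] ^= c

    def insert(self, key):
        self._update(key, +1)

    def delete(self, key):
        self._update(key, -1)

    def decode(self):
        """Peel 'pure' cells to recover the symmetric difference."""
        extra, missing = set(), set()
        progress = True
        while progress:
            progress = False
            for i in range(self.m):
                if self.count[i] in (1, -1):
                    key = self.key_sum[i]
                    if self.chk_sum[i] != _h(f"chk:{key}".encode()):
                        continue  # cell holds several keys; not actually pure
                    if self.count[i] == 1:
                        extra.add(key)
                        self.delete(key)
                    else:
                        missing.add(key)
                        self.insert(key)
                    progress = True
        return extra, missing
```

A node would insert its txids, `delete` a peer's ids against the table, and peel: the better-synced the mempools, the smaller the table needs to be, which is the convergence incentive being discussed.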

the 1MB was either forward-looking, set too high, or only concerned about the peak (assuming the average would be much lower) ... or a mixture of these cases.

So, in 2010 Satoshi was forward-looking, when the 1MB limit was several orders of magnitude larger than block sizes. Yet today we are no longer forward-looking, nor do we care about peak volumes, and we get ready to fiddle while Rome burns. The 1MB limit is proving a magnet for spammers, as every day the average block size creeps up and makes their job easier. A lot of people have a vested interest in seeing Bitcoin crippled. We should not provide them an ever-widening attack vector.

To further make the point about mempools, here is what the mempool looks like on a node with mintxfee=0.0005 / minrelaytxfee=0.0005 set:


$ ~/bitcoin/src/bitcoin-cli  getmempoolinfo
{
    "size" : 301,
    "bytes" : 271464
}



That min fee of 0.0005 is 14 cents, and most users consider this way too high, especially if BTC goes back to $1000 and it becomes 50 cents. I kicked off a poll about tx fees: 55% of users don't want to pay more than 1 cent, and 80% think 5 cents or less is enough of a fee.
https://bitcointalk.org/index.php?topic=827209.0
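The arithmetic behind those figures (assuming roughly $280/BTC, the approximate mid-2015 price):

```python
BTC_USD = 280.0       # assumed mid-2015 spot price (approximate)
MIN_FEE_BTC = 0.0005  # the minrelaytxfee setting from the example above

print(f"${MIN_FEE_BTC * BTC_USD:.2f}")  # $0.14 at ~$280/BTC
print(f"${MIN_FEE_BTC * 1000.0:.2f}")   # $0.50 if BTC returns to $1000
```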

Maybe this is naive and unrealistic long-term, and a viable fee market (once the block reward is lower) could push this up a little. Or is this another case where the majority of users are wrong yet again?

Peter made the point that Bitcoin is at a convergence of numerous disciplines, of which no-one is an expert in all. I suggest that while your technical knowledge is absolutely phenomenal, your grasp of the economic incentives in the global marketplace is much weaker.
While Cypherdoc might have had errors in judgment in the Hashfast matter (I know zero about this, and have zero interest in it), his knowledge of the financial marketplace is also phenomenal, and he correctly assesses how Bitcoin can be an economic force for good, empowering people trapped in dysfunctional third-world economies. He is right that Bitcoin has to scale, and cheaply, for users to maintain a virtuous feedback cycle of ecosystem growth, hashing-power growth and SoV.
Lots of people will not pay fees of 14c per tx when cheaper alternatives like LTC are out there. I see the recent spike in it (disclaimer: I don't hold any) as the market "pricing in" that BTC tx throughput is going to be artificially capped. While BTC tx throughput will always be capped by technology, we should not be capping it at some lower level in the misguided belief that this "helps".



TPTB_need_war
Sr. Member
****
Offline Offline

Activity: 420


View Profile
July 07, 2015, 01:09:08 AM
 #28303

furthermore, you ignore the obvious fact that hashers are independently minded and will leave any pool that abuses its power via all the shenanigans you dream up to scare everyone about how bad Bitcoin is.

From Meni Rosenfeld's paper, the probability that a pool (or any solo miner) will receive any payout for a day of mining is:

P(payout) = 1 - e^(-f x 144), where f is the pool's fraction of the network hashrate and there are 144 blocks per day

Thus a pool which has only 1% of the network hashrate has only 76% chance of winning any blocks for the day. And that probability is reset the next day. Thus a pool with only 1% of the network hashrate could go days without winning a block.

This makes it very difficult to design a payout scheme (c.f. the schemes Meni details) for ephemeral SPV pool miners (which can come and go as often as they like) that is equitable and yet doesn't place the pool at risk of bankruptcy, while also allowing for the fact that running a pool is an extremely high-competition, substitutable-good, low-profit-margin business (unless you scale up economically, and especially if you use monopolistic tactics).

In short, it is nearly implausible economically to run a pool that has only 1% of the network hashrate.

Thus you can pretty well be damn sure that the pools are Sybil attacked and are lying about their controlling stakes, such that multiple 1% pools must be sharing the same pot of income from blocks in order to make the economics and math work.

QED.

Edit: note that with 2.5 minute blocks (i.e. Litecoin), it improves considerably:

P(payout) = 1 - e^(-f x 576), where f is the pool's fraction of the network hashrate and there are 576 blocks per day

Thus a pool which has only 1% of the network hashrate has a 99.7% chance of winning any blocks for the day.

However one must then factor in that latency (and thus orphan rate) becomes worse, and higher hashrate profits more than lower hashrate given any significant amount of latency in the network, as gmaxwell pointed out upthread. So it is not clear that Litecoin gained any advantage in terms of decentralization from the faster block period.
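The figures above can be checked in a few lines; the exponent is just the pool's expected number of blocks per day:

```python
import math

def p_any_payout(hashrate_share, blocks_per_day):
    """Probability a pool wins at least one block in a day,
    modelling block finds as a Poisson process."""
    return 1 - math.exp(-hashrate_share * blocks_per_day)

print(f"{p_any_payout(0.01, 144):.1%}")  # Bitcoin (10-min blocks): ~76%
print(f"{p_any_payout(0.01, 576):.1%}")  # Litecoin (2.5-min blocks): ~99.7%
```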

cypherdoc
Legendary
*
Offline Offline

Activity: 1764



View Profile
July 07, 2015, 01:43:31 AM
 #28304

no, memory is not just used for 1MB blocks.  it's also used to store the mempools plus the UTXO set.  large block attacks
Again, you're wrong on the technology. The UTXO set is not held in RAM. (There is caching, but it's arbitrary in size, controlled by the dbcache argument.)

as you know, even Gavin talks about this memory problem from UTXO.  and yes, i read the resulting Reddit thread in which you participated, and i'm aware that UTXO can be dynamically cached according to needs.
http://gavinandresen.ninja/utxo-uhoh
Quote
have the potential to collapse a full node by overloading the memory.  at least, that's what they've been arguing.
"They" in that case is sketchy nutballs advocating these "stress tests", and _you_ arguing that unconfirmed transactions are the real danger.

Super weird that you're arguing that the Bitcoin network is overloaded at the average level of space usage in blocks, while you're calling your system "under-utilized" when you're using a similar proportion of your disk and enough of your RAM to push you deeply into swap.

i didn't say this full block spam attack we're undergoing wasn't affecting my node at_all.  sure, i'm in swap, b/c of the huge #unconf tx's but it hasn't shut down or stressed my nodes to any degree.  one of the arguments by Cripplecoiners was that these large block attacks would shut full nodes down from destabilization resulting in centralization.  i'm not seeing that.
Peter R
Legendary
*
Offline Offline

Activity: 1036



View Profile
July 07, 2015, 02:00:49 AM
 #28305

1.  Is F2Pool/AntPool more likely to produce an empty block when the previous block is large?

2.  Is F2Pool/AntPool more likely to produce an empty block when mempool swells?

I think the answer to Q1 will be "yes." But I don't see why the answer to Q2 would be yes for any reason other than the previous block is more likely to be large when mempool swells (i.e., mempool is not the cause, just correlated).

OK, based on the data from JohhnyBravo, it looks like the answer to Q1 is "YES" for F2Pool but "NO" for AntPool.  Here's a rough summary based on the last 100 days of blocks:



If anyone's interested in how I did this, I'll give you a brief walk-through.  I first put the data into three bins:
(1) blocks produced by the Miner after a small block (0 kB - 333 kB),
(2) blocks produced by the Miner after a medium block (334 kB - 666 kB), and
(3) blocks produced by the Miner after a large block (667 kB - 1000 kB).
I also removed any data points where the Miner found a block while mining upon his own previous block.  

For a "null hypothesis," I assumed that getting an empty block was the outcome of a repeated Bernoulli trial, with P_empty = 3.5% for F2Pool and P_empty = 6.3% for AntPool.

I then asked: if the null hypothesis is true, what are the chances of getting, e.g., in the case of F2Pool, 34 (or more) empty blocks out of 619 "large-block" trials?  We'd only expect 619 x 0.035 ≈ 22 empty blocks!

The sum of repeated Bernoulli trials has a Binomial distribution, so I summed the corresponding Binomial distribution from 34 to 619 to determine the chances of getting this many (or more) empty blocks by dumb luck.  As we can see, there's only a 0.4% chance of this happening.  This suggests we reject the null hypothesis in favour of the idea that "Yes, F2Pool is actually more likely to produce an empty block when the previous block was large."

This also means that the effect I modelled on the weekend is real, at least for miners behaving like F2Pool.  I'm curious though, why AntPool's data is so different.
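The tail probability described above can be reproduced with an exact Binomial sum (numbers taken from the post; the ~0.4% figure falls out directly):

```python
from math import comb

def binom_tail(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p), computed as an exact sum."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 619 "large-block" trials, P_empty = 3.5%, 34 observed empty blocks
p_val = binom_tail(619, 0.035, 34)
print(f"{p_val:.4f}")
```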

Run Bitcoin Unlimited (www.bitcoinunlimited.info)
cypherdoc
Legendary
*
Offline Offline

Activity: 1764



View Profile
July 07, 2015, 02:02:12 AM
 #28306

just look at this.  pitiful.  just shameful that core dev allows attacking spammers, emboldened by Stress Tests 1&2, to disrupt new and existing users who can be found complaining all over Reddit with stuck tx's.  this is exactly the dynamic that Mike Hearn was talking about.  look at that level of unconf tx's, 51000, never seen before and the highly disruptive 2.90 TPS:

https://www.reddit.com/r/Bitcoin/comments/3cbpwe/new_transaction_record_just_reached_147_txs/csu5leg


smooth
Legendary
*
Offline Offline

Activity: 1484



View Profile
July 07, 2015, 02:09:04 AM
 #28307

furthermore, you ignore the obvious fact that hashers are independently minded and will leave any pool that abuses its power via all the shenanigans you dream up to scare everyone about how bad Bitcoin is.

From Meni Rosenfeld's paper, the probability that a pool (or any solo miner) will receive any payout for a day of mining is:

P(payout) = 1 - e^(-f x 144), where f is the pool's fraction of the network hashrate and there are 144 blocks per day

Thus a pool which has only 1% of the network hashrate has only 76% chance of winning any blocks for the day. And that probability is reset the next day. Thus a pool with only 1% of the network hashrate could go days without winning a block.

This makes it very difficult to design a payout scheme (c.f. the schemes Meni details) for ephemeral SPV pool miners (which can come and go as often as they like) that is equitable and yet doesn't place the pool at risk of bankruptcy, while also allowing for the fact that running a pool is an extremely high-competition, substitutable-good, low-profit-margin business (unless you scale up economically, and especially if you use monopolistic tactics).

In short, it is nearly implausible economically to run a pool that has only 1% of the network hashrate.

Thus you can pretty well be damn sure that the pools are Sybil attacked and are lying about their controlling stakes, such that multiple 1% pools must be sharing the same pot of income from blocks in order to make the economics and math work.

QED.

Edit: note that with 2.5 minute blocks (i.e. Litecoin), it improves considerably:

P(payout) = 1 - e^(-f x 576), where f is the pool's fraction of the network hashrate and there are 576 blocks per day

Thus a pool which has only 1% of the network hashrate has a 99.7% chance of winning any blocks for the day.

However one must then factor in that latency (and thus orphan rate) becomes worse, and higher hashrate profits more than lower hashrate given any significant amount of latency in the network, as gmaxwell pointed out upthread. So it is not clear that Litecoin gained any advantage in terms of decentralization from the faster block period.

Conclusions are valid in substance, but in terms of not getting blocks for days, that's not really a big problem for a pool (or solo mining farm) being run as a business. Most expenses (hosting, etc.) run monthly, possibly somewhat more often (paying workers), but not daily. Over a one-month window it is about the same as Litecoin: almost certain to find a block. With some ability to draw on reserves or credit in the event of a few bad months, it should still be viable.


cypherdoc
Legendary
*
Offline Offline

Activity: 1764



View Profile
July 07, 2015, 02:18:13 AM
 #28308

1.  Is F2Pool/AntPool more likely to produce an empty block when the previous block is large?

2.  Is F2Pool/AntPool more likely to produce an empty block when mempool swells?

I think the answer to Q1 will be "yes." But I don't see why the answer to Q2 would be yes for any reason other than the previous block is more likely to be large when mempool swells (i.e., mempool is not the cause, just correlated).

OK, based on the data from JohhnyBravo, it looks like the answer to Q1 is "YES" for F2Pool but "NO" for AntPool.  Here's a rough summary based on the last 100 days of blocks:



If anyone's interested in how I did this, I'll give you a brief walk-through.  I first put the data into three bins:
(1) blocks produced by the Miner after a small block (0 kB - 333 kB),
(2) blocks produced by the Miner after a medium block (334 kB - 666 kB), and
(3) blocks produced by the Miner after a large block (667 kB - 1000 kB).
I also removed any data points where the Miner found a block while mining upon his own previous block.  

For a "null hypothesis," I assumed that getting an empty block was the outcome of a repeated Bernoulli trial, with P_empty = 3.5% for F2Pool and P_empty = 6.3% for AntPool.

I then asked: if the null hypothesis is true, what are the chances of getting, e.g., in the case of F2Pool, 34 (or more) empty blocks out of 619 "large-block" trials?  We'd only expect 619 x 0.035 ≈ 22 empty blocks!

The sum of repeated Bernoulli trials has a Binomial distribution, so I summed the corresponding Binomial distribution from 34 to 619 to determine the chances of getting this many (or more) empty blocks by dumb luck.  As we can see, there's only a 0.4% chance of this happening.  This suggests we reject the null hypothesis in favour of the idea that "Yes, F2Pool is actually more likely to produce an empty block when the previous block was large."

This also means that the effect I modelled on the weekend is real, at least for miners behaving like F2Pool.  I'm curious though, why AntPool's data is so different.

why does the 0 tx block have to come "immediately" after a large block?
Peter R
Legendary
*
Offline Offline

Activity: 1036



View Profile
July 07, 2015, 02:22:04 AM
 #28309

why does the 0 tx block have to come "immediately" after a large block?

They don't.  Empty blocks can come after any sized block.  But I just showed that F2Pool is more likely to produce an empty block when the previous block was large, than when the previous block was not large.  

This makes sense to me because I expect that for large blocks, there's more time between when F2Pool has just enough information to begin hashing and when they have processed the block and sent a new non-empty block template to their hashers to work on.  If this time is longer, then there's a better chance they get lucky and mine an empty block.  See what I mean?
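That intuition can be made quantitative: block arrivals are memoryless, so if a pool's hashers grind on an empty template for T seconds after each new block, the expected fraction of the pool's own blocks that are empty is 1 - e^(-T/600). A quick sanity check against the P_empty values used earlier (the T values are the rough processing delays discussed in this thread):

```python
import math

def empty_fraction(delay_s, avg_block_interval_s=600):
    """Expected fraction of a pool's blocks that are empty, if it hashes
    an empty template for delay_s after each new block (memoryless model)."""
    return 1 - math.exp(-delay_s / avg_block_interval_s)

print(f"{empty_fraction(16):.1%}")  # ~2.6%, close to F2Pool's 3.5%
print(f"{empty_fraction(35):.1%}")  # ~5.7%, close to AntPool's 6.3%
```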

Run Bitcoin Unlimited (www.bitcoinunlimited.info)
cypherdoc
Legendary
*
Offline Offline

Activity: 1764



View Profile
July 07, 2015, 02:23:35 AM
 #28310

why does the 0 tx block have to come "immediately" after a large block?

They don't.  Empty blocks can come after any sized block.  But I just showed that F2Pool is more likely to produce an empty block / "defensive block" when the previous block was large than they are when the previous block was small or medium. 



it might be interesting to see if AntPool becomes statistically significant after 2 blocks instead of 1.
gmaxwell
Staff
Legendary
*
Offline Offline

Activity: 2254



View Profile
July 07, 2015, 02:41:27 AM
 #28311

Clean and synched mempools makes for a cleaner blockchain, else garbage in - garbage out. Most mempools are synched because node owners don't usually mess with tx policy. They accept the defaults.
The blockchain itself contains substantial counter-evidence. Any block over 750k is running with changed settings, as are a substantial chunk of the transactions.  I think this is all well and good, but it's not the case that it's all consistent.

Quote
IBLT doesn't currently exist, and other mechenisms like the relay network protocol don't care about mempool synchronization levels.

IBLT does exist as it has been prototyped by Kalle and Rusty. It is just nowhere near ready for a pull request.
It has never relayed a _single_ block, not in a lab, not anywhere. It does _not_ exist. It certainly can and will exist-- though it's not yet clear how useful it will be over the relay network-- Gavin, for example, doesn't believe it will be useful "until blocks are hundreds of megabytes".

But don't think that I'm saying anything bad about it-- I'm not. Cypherdoc was arguing that mempools were (and had) to be the same, and cited IBLT as a reason---- but it cannot currently be a reason, because it doesn't exist.  Be careful about assigning virtue to the common-fate aspect of it-- it can make censorship much worse. (OTOH, Rusty's latest optimizations reduce the need for consistency; and my network block coding idea-- which is what inspired IBLT, but is more complex-- basically eliminates consistency pressure entirely.)

Quote
I recall that you had a tepid response summarizing the benefit of IBLT as a x2 improvement.  Of course this is hugely dismissive because it ignores a very important factor in scaling systems: required information density per unit time. Blocks having to carry all the data in 1 second which earlier took 600 seconds is a bottleneck in the critical path.
It depends on what you're talking about: if you're talking about throughput it's at best a 2x improvement; if you're talking about latency it's more.  But keep in mind that the existing, widely deployed block relay network protocol reduces the data sent per already-known transaction to _two bytes_.

Quote
That min fee at 0.0005 is 14 cents, and most users consider this to be way too high, especially if BTC goes back to $1000 and this becomes 50 cents. I kicked off a poll about tx fees and 55% of users don't want to pay more than 1 cent, 80% of users think 5 cents or less is enough of a fee.
https://bitcointalk.org/index.php?topic=827209.0
GAH! I'm not saying it's a good setting-- I'm just giving a concrete example that nodes (and miners) can control their mempool sizes, as this was at odds with cypherdoc's expectations-- instead he thought miners might be suffering because of large mempools-- and I pointed out that if their mempool was too big they could simply reduce it, and he said he didn't believe me. I don't know how I could have made it more clear, but I hope it's clear now. Smiley


Bitcoin will not be compromised
cypherdoc
Legendary
*
Offline Offline

Activity: 1764



View Profile
July 07, 2015, 02:41:59 AM
 #28312

why does the 0 tx block have to come "immediately" after a large block?

They don't.  Empty blocks can come after any sized block.  But I just showed that F2Pool is more likely to produce an empty block when the previous block was large, than when the previous block was not large.  

This makes sense to me because I expect that for large blocks, there's more time between when F2Pool has just enough information to begin hashing and when they have processed the block and sent a new non-empty block template to their hashers to work on.  If this time is longer, then there's a better chance they get lucky and mine an empty block.  See what I mean?


i think so which is also why these 0 tx blocks usually come within a minute of a large block?
Peter R
Legendary
*
Offline Offline

Activity: 1036



View Profile
July 07, 2015, 02:45:25 AM
 #28313

why does the 0 tx block have to come "immediately" after a large block?

They don't.  Empty blocks can come after any sized block.  But I just showed that F2Pool is more likely to produce an empty block when the previous block was large, than when the previous block was not large.  

This makes sense to me because I expect that for large blocks, there's more time between when F2Pool has just enough information to begin hashing and when they have processed the block and sent a new non-empty block template to their hashers to work on.  If this time is longer, then there's a better chance they get lucky and mine an empty block.  See what I mean?


i think so which is also why these 0 tx blocks usually come within a minute of a large block?

Yes, exactly.  And we've just shown that, in the case of F2Pool, the effect is real.  We're not imagining it. 

Run Bitcoin Unlimited (www.bitcoinunlimited.info)
cypherdoc
Legendary
*
Offline Offline

Activity: 1764



View Profile
July 07, 2015, 02:56:24 AM
 #28314

GAH! I'm not saying it's a good setting-- I'm just giving a concrete example that nodes (and miners) can control their mempool sizes, as this was at odds with cypherdoc's expectations-- instead he thought miners might be suffering because of large mempools-- and I pointed out that if their mempool was too big they could simply reduce it, and he said he didn't believe me. I don't know how I could have made it more clear, but I hope it's clear now. Smiley



yes, thanks for reminding me of minrelaytxfee as an adjustable variable to screen the mempool.  i had read about that the other day on Reddit but forgot.  i'm not a tech guy after all. Wink

but you haven't verified that F2Pool or AntPool have increased their minrelaytxfee to minimize their mempools, have you?  remember, this whole mempool discussion was based on you responding to Peter's mathematics post the other day, where you argued that block verification times were only 80ms for a 250 kB block b/c tx's had been pre-verified after being passed around to all nodes across the network and didn't require re-verification by miners on the relay network, and that this was therefore a refutation of his hypothesis of increasing block verification times (16-37 sec on avg) leading to SPV mining.
Peter R
Legendary
*
Offline Offline

Activity: 1036



View Profile
July 07, 2015, 02:59:41 AM
 #28315

GAH! I'm not saying it's a good setting-- I'm just giving a concrete example that nodes (and miners) can control their mempool sizes, as this was at odds with cypherdoc's expectations-- instead he thought miners might be suffering because of large mempools-- and I pointed out that if their mempool was too big they could simply reduce it, and he said he didn't believe me. I don't know how I could have made it more clear, but I hope it's clear now. Smiley



yes, thanks for reminding me of minrelaytxfee as an adjustable variable to screen the mempool.  i had read about that the other day on Reddit but forgot.  i'm not a tech guy after all. Wink

but you haven't verified that F2Pool or AntPool have increased their minrelaytxfee to minimize their mempools, have you?  remember, this whole mempool discussion was based on you responding to Peter's mathematics post the other day, where you argued that block verification times were only 80ms for a 250 kB block b/c tx's had been pre-verified after being passed around to all nodes across the network and didn't require re-verification by miners on the relay network, and that this was therefore a refutation of his hypothesis of increasing block verification times (16-37 sec on avg) leading to SPV mining.

Let me chime in here quickly, because I think Greg and I are talking about slightly different things.  My model was considering the time between the first moment that a pool could begin hashing on a block header and when the previous block had been processed, a new non-empty block template constructed, and the hashers re-assigned to work on this non-empty block.

It looks like this time, empirically, is 16 sec (F2Pool) and 35 sec (AntPool), based on these estimates.  

Run Bitcoin Unlimited (www.bitcoinunlimited.info)
gmaxwell
Staff
Legendary
*
Offline Offline

Activity: 2254



View Profile
July 07, 2015, 03:09:22 AM
 #28316

as you know, even Gavin talks about this memory problem from UTXO.  and yes, i read the Reddit thread that resulted in which you participated and i'm aware that UTXO can be dynamically cached according to needs.
http://gavinandresen.ninja/utxo-uhoh

Gavin was insufficiently precise. The reddit thread is full of people calling gavin a fool ( Sad ) for saying "memory" when he should have been saying fast storage.  https://twitter.com/petertoddbtc/status/596710423094788097

Why do you think it's prudent to argue this with me?

Okay, lets take a bet. Since you're so confident; surely you'll grant me 1000:1 odds?-- I'll give my side away to a public cause.

The question is "Is the entire UTXO set kept in RAM in any released version of Bitcoin Core?"

I will bet 3 BTC and, with the 1000:1 odds, if you lose you'll pay 3000 BTC (which I will give to the hashfast liquidators, to return it to the forum members it was taken from; that will also save you some money in the ongoing lawsuit against you).

Sounds good?  How will we adjudicate?  If not, what is your counter-offer for the terms?

Quote
i didn't say this full block spam attack we're undergoing wasn't affecting my node at_all.  sure, i'm in swap, b/c of the huge #unconf tx's but it hasn't shut down or stressed my nodes to any degree.  one of the arguments by Cripplecoiners was that these large block attacks would shut full nodes down from destabilization resulting in centralization.  i'm not seeing that.
The highest number of unconfirmed transactions I've ever seen is about 8MB. Even if we assume the real max was 3x that, this does not explain your hundreds of megabytes of swap.   We just had half the hashpower of the network mining without validating, creating multiple large forks and large reorganizations, but you don't see any destabilization. Okay.

Let me chime in here quickly, because I think Greg and I are talking about slightly different things.  My model was considering the time between the first moment that a pool could begin hashing on a block header and when the previous block had been processed, a new non-empty block template constructed, and the hashers re-assigned to work on this non-empty block.

It looks like this time, empirically, is 15 sec (F2Pool) and 30 sec (AntPool), based on these estimates.  

Here I suspect you're suffering from an excess of empiricism without adequately delving into the mechanism.   You can directly measure the time from input to minable on an actual node under your control, and you will observe the time is hundreds of times faster than your estimate. Why?   Miners don't magically know when their pool has new work; they'll get work in the first milliseconds and then grind on it for some time before returning work.  Even if the pool long-polls them, it takes time to replace work. So what I suspect you're actually measuring there is the latency of the mining process...  which is consistent with what we've experienced with P2Pool (5-20 second latencies from ASIC miners are common).

I noted you posted a result of a classification, did you run the same data through a simple logistic regression with prior size as the treatment? The intercept in the model would be interesting.
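To make the suggested regression concrete, here is a minimal numpy-only sketch (synthetic data, my own construction, not from the thread): regress an empty-block indicator on prior block size and inspect the intercept, which captures the size-independent baseline rate of empty blocks.

```python
import numpy as np

def fit_logistic(x, y, iters=25):
    """Logistic regression via Newton's method; returns [intercept, slope]."""
    X = np.column_stack([np.ones_like(x), x])
    w = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        W = p * (1 - p)                      # IRLS weights
        grad = X.T @ (y - p)
        hess = X.T @ (X * W[:, None])
        w += np.linalg.solve(hess, grad)
    return w

# Synthetic data: empty-block probability rises with prior block size (MB),
# but with a large constant term (true intercept -3, slope 2.5).
rng = np.random.default_rng(0)
prior_size = rng.uniform(0.0, 1.0, 5000)
p_empty = 1.0 / (1.0 + np.exp(-(-3.0 + 2.5 * prior_size)))
empty = (rng.random(5000) < p_empty).astype(float)

w = fit_logistic(prior_size, empty)
print(w)  # intercept well below zero: empty blocks occur even after tiny blocks
```

A strongly negative intercept alongside a positive slope would say exactly what Greg suggests: most of the empty-block rate is a constant, not driven by the prior block's size.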

But indeed, these conversations have been conflating several separate issues (latency vs throughput, etc.). Tricky to avoid that since they're all relevant.

but you haven't verified that f2pool or Antpool have increased their minrelaytxfee to minimize their mempool, have you?
I have, they'd previously cranked it down, and were producing small blocks and were flamed in public.  They've since turned it back up.

Quote
remember, this whole mempool discussion was based on you responding to Peter's mathematics post the other day, where you argued that block verification times were only 80 ms for a 250 kB block b/c tx's had been pre-verified as they were passed around to all nodes across the network and didn't require re-verification by miners on the relay network, and that this refuted his hypothesis of increasing block verification times (16-37 sec on avg) leading to SPV mining.
As PeterR points out, they only need to wait for verification to actually verify (which they're not doing today), though they may have to wait longer to include transactions. But I'd point out that's not fundamental: e.g. no matter how big the backlog is, you can produce a template sufficient to completely fill a block while doing no more work than handling a mempool of twice the maximum block size (by using a tiered mempool, though no one has bothered to implement this yet; no one has even been complaining about how long CreateNewBlock takes, due to the ability to produce empty blocks without skipping transactions).
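The tiered-mempool idea is roughly: cap the pool at ~2x the block size by evicting the lowest fee-rate transactions, so template construction never scans more than two blocks' worth of data no matter how large the backlog. A toy sketch (sizes and names are illustrative, not Bitcoin Core code):

```python
import heapq

MAX_BLOCK = 1_000_000            # bytes; 1 MB cap assumed for illustration
POOL_CAP = 2 * MAX_BLOCK         # keep at most ~2 blocks' worth of txs

class TieredMempool:
    """Sketch: retain only the best ~2 blocks of txs by fee rate, so
    building a full block template is bounded work."""
    def __init__(self):
        self.txs = []            # min-heap of (fee_rate, size, txid)
        self.bytes = 0

    def add(self, txid, size, fee):
        heapq.heappush(self.txs, (fee / size, size, txid))
        self.bytes += size
        while self.bytes > POOL_CAP:          # evict lowest fee-rate txs
            _, sz, _ = heapq.heappop(self.txs)
            self.bytes -= sz

    def template(self):
        """Fill a block with the highest fee-rate txs still in the pool."""
        chosen, total = [], 0
        for rate, size, txid in sorted(self.txs, reverse=True):
            if total + size <= MAX_BLOCK:
                chosen.append(txid)
                total += size
        return chosen
```

However large the flood of incoming transactions, the pool never holds more than POOL_CAP bytes, and a full (non-empty) template is always available.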

Bitcoin will not be compromised
cypherdoc
Legendary
Activity: 1764
July 07, 2015, 03:13:17 AM
 #28317

look at what we're facing with this latest spam attack.  note the little blip back on May 29, which was Stress Test 1.  Stress Test 2 is the blip in the middle, with the huge spikes of the last couple of days on the far right.  this looks to me to be the work of a non-economic spammer looking to disrupt new and existing users via stuck tx's, which coincides with the Grexit and trouble in general in the traditional fiat markets.  they want to discourage adoption of Bitcoin.  the fastest way to eliminate this attack on users is to lift the block size limit, which alleviates the congestion and increases the expense of the spam:

Peter R
Legendary
Activity: 1036
July 07, 2015, 03:18:18 AM
 #28318

...
So what I suspect you're actually measuring there is the latency of the mining process...  which is consistent with what we've experienced with P2Pool (5-20 second latencies from ASIC miners are common).

Interesting!  

And this is why I like the empirical "black box" approach.  I don't care initially what the mechanism is.  I try to find a simple model that explains the effect, and then, later, ask what that mechanism might be.

But now why would the "latency of the mining process" depend on the size of the previous block?  That doesn't make sense to me, but we just showed empirically that F2Pool (but not AntPool) is indeed more likely to produce an empty block when the previous block was large (suggesting that processing large blocks [including the mining process latency] takes longer for some reason).

Run Bitcoin Unlimited (www.bitcoinunlimited.info)
gmaxwell
Staff
Legendary
Activity: 2254
July 07, 2015, 03:23:56 AM
 #28319

Interesting!  
And this is why I like the empirical "black box" approach.  I don't care initially what the mechanism is.  I try to find a simple model that explains the effect, and then, later, ask what that mechanism might be.
But now why would the "latency of the mining process" depend on the size of the previous block?  That doesn't make sense to me, but we just showed empirically that F2Pool is indeed more likely to produce an empty block when the previous block is large.
I wouldn't expect the miner latency part to be size dependent: the miner can't even tell how big the prior block was.  I expect your function relating them to have a big constant term in it! (that's why I asked if you tried other regression approaches.)

I suppose there may be some dependence that is introduced by virtue of what percentage of the miners got the dummy work.  Would be pretty interesting to try to separate that.

Another trap of empirical analysis in this kind of discussion is that we can only measure how the system is-- but then we use that to project the future;  e.g.  say we didn't have ECDSA caching today, you might then measure that it was taking >2 minutes to verify a maximum size block... and yet 100 lines of code and that cost vanishes; which is bad news if you were counting on it to maintain incentives. :)
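The ECDSA-caching point (pay the expensive check once at relay time; block validation becomes lookups) can be sketched with a toy cache. This is my own illustration, not Bitcoin Core's actual sigcache code; `verify_fn` stands in for the expensive ECDSA verification.

```python
import hashlib

class SigCache:
    """Toy signature-verification cache: an expensive verification done
    when a tx is first relayed makes later re-validation (e.g. inside a
    block) a cheap set lookup."""
    def __init__(self, verify_fn):
        self.verify_fn = verify_fn       # the expensive ECDSA check
        self.valid = set()
        self.calls = 0                   # count of expensive verifications

    def _key(self, pubkey, msg, sig):
        return hashlib.sha256(pubkey + msg + sig).digest()

    def verify(self, pubkey, msg, sig):
        k = self._key(pubkey, msg, sig)
        if k in self.valid:              # already verified at relay time
            return True
        self.calls += 1
        ok = self.verify_fn(pubkey, msg, sig)
        if ok:
            self.valid.add(k)
        return ok
```

With a cache like this in place, a measured ">2 minutes to verify a maximum size block" would collapse to near zero for any block whose transactions were already relayed, which is exactly why projecting incentives from today's measured costs is fragile.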

Bitcoin will not be compromised
cypherdoc
Legendary
Activity: 1764
July 07, 2015, 03:24:19 AM
 #28320

as you know, even Gavin talks about this memory problem from UTXO.  and yes, i read the Reddit thread that resulted in which you participated and i'm aware that UTXO can be dynamically cached according to needs.
http://gavinandresen.ninja/utxo-uhoh

Gavin was insufficiently precise. There is a reddit thread full of people calling Gavin a fool :( for saying "memory" when he should have said fast storage.  https://twitter.com/petertoddbtc/status/596710423094788097

Why do you think it's prudent to argue this with me?

i'm not really arguing about this with you.  you said UTXO is not in memory.  i'm saying it depends on how fast a node wants to verify tx's, via the dynamic caching setting they choose, which does get stored in memory.
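For reference, the cache being discussed is tunable in Bitcoin Core via the dbcache option; a bitcoin.conf fragment (the 4000 figure is just an illustrative choice, and the default was 100 MiB in the 0.10.x era, if memory serves):

```shell
# bitcoin.conf -- size of the database/UTXO cache in MiB.
# Larger values keep more of the UTXO set in RAM, reducing disk reads
# during transaction and block validation, at the cost of node memory.
dbcache=4000
```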

Quote

Quote
i didn't say this full block spam attack we're undergoing wasn't affecting my node at_all.  sure, i'm in swap, b/c of the huge #unconf tx's but it hasn't shut down or stressed my nodes to any degree.  one of the arguments by Cripplecoiners was that these large block attacks would shut full nodes down from destabilization resulting in centralization.  i'm not seeing that.
The highest number of unconfirmed transactions I've ever seen is about 8MB. Even if we assume the real max was 3x that, this is not explaining your hundreds of megabytes of swap.   We just had half the hashpower of the network mining without validating, creating multiple large forks and large reorganizations, but you don't see any destabilization. Okay.

ok, i'm not getting the bolded part.  this graph shows 37 MB worth of unconf tx's, no?:



