Poll
Question: Will you support Gavin's new block size limit hard fork of 8MB by January 1, 2016, then doubling every 2 years?
1.  yes
2.  no

Author Topic: Gold collapsing. Bitcoin UP.  (Read 1803957 times)
cypherdoc (Legendary)
July 07, 2015, 03:13:17 AM  #28341

look at what we're facing with this latest spam attack.  note the little blip back on May 29, which was Stress Test 1.  Stress Test 2 is the blip in the middle, and the huge spikes of the last couple of days are on the far right.  this looks to me like the work of a non-economic spammer looking to disrupt new and existing users via stuck tx's, coinciding with the Grexit and trouble in general in the traditional fiat markets.  they want to discourage adoption of Bitcoin.  the fastest way to eliminate this attack on users is to lift the block size limit, which would alleviate the congestion and increase the expense of the spam:

[chart of the unconfirmed-transaction backlog omitted]
Peter R (Legendary)
July 07, 2015, 03:18:18 AM  #28342

Quote from: gmaxwell
...So what I suspect you're actually measuring there is the latency of the mining process... which is consistent with what we've experienced with P2Pool (5-20 second latencies from ASIC miners are common).

Interesting!

And this is why I like the empirical "black box" approach.  I don't care initially what the mechanism is.  I try to find a simple model that explains the effect, and then, later, ask what that mechanism might be.

But now why would the "latency of the mining process" depend on the size of the previous block?  That doesn't make sense to me, but we just showed empirically that F2Pool (but not AntPool) is indeed more likely to produce an empty block when the previous block was large (suggesting that processing large blocks, including the mining-process latency, takes longer for some reason).
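
(For what it's worth, the "simple model" here fits in a few lines: if block arrivals are Poisson with a 600-second mean and a pool hashes on an empty template for tau seconds after each block while the previous one is processed, the chance its next block is empty is 1 - exp(-tau/600). A sketch in Python, using the 15 s / 30 s estimates from this exchange; the model is an illustration, not Peter R's published analysis.)

Code:
import math

def p_empty(tau_s, mean_block_s=600.0):
    # P(the pool finds its next block while still hashing on an empty
    # template), assuming memoryless (Poisson) block arrivals.
    return 1.0 - math.exp(-tau_s / mean_block_s)

for tau in (15, 30):  # estimated dead times for F2Pool and AntPool
    print("tau = %2d s -> P(empty) = %.1f%%" % (tau, 100 * p_empty(tau)))
# tau = 15 s -> P(empty) = 2.5%
# tau = 30 s -> P(empty) = 4.9%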

Run Bitcoin Unlimited (www.bitcoinunlimited.info)
gmaxwell (Staff, Legendary)
July 07, 2015, 03:23:56 AM  #28343

Quote from: Peter R
Interesting!
And this is why I like the empirical "black box" approach.  I don't care initially what the mechanism is.  I try to find a simple model that explains the effect, and then, later, ask what that mechanism might be.
But now why would the "latency of the mining process" depend on the size of the previous block?  That doesn't make sense to me, but we just showed empirically that F2Pool is indeed more likely to produce an empty block when the previous block is large.

I wouldn't expect the miner-latency part to be size dependent: the miner can't even tell how big the prior block was.  I expect your function relating them to have a big constant term in it! (That's why I asked if you tried other regression approaches.)

I suppose there may be some dependence introduced by virtue of what percentage of the miners got the dummy work.  It would be pretty interesting to try to separate that.

Another trap of empirical analysis in this kind of discussion is that we can only measure how the system is -- but then we use that to project the future.  E.g. say we didn't have ECDSA caching today: you might then measure that it was taking >2 minutes to verify a maximum-size block... and yet 100 lines of code and that cost vanishes, which is bad news if you were counting on it to maintain incentives. :)
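
(To make the ECDSA-caching aside concrete: the idea is to remember signatures that already passed full verification at mempool-acceptance time, so block validation can skip re-verifying them. A minimal sketch of the idea, assuming a generic ecdsa_verify function; this is an illustration, not Bitcoin Core's actual sigcache code.)

Code:
from collections import OrderedDict

class SigCache:
    """Remember (sig_hash, signature, pubkey) triples that already passed
    full ECDSA verification, so block validation can skip them."""
    def __init__(self, max_entries=50_000):
        self.entries = OrderedDict()
        self.max_entries = max_entries

    def contains(self, sig_hash, sig, pubkey):
        return (sig_hash, sig, pubkey) in self.entries

    def add(self, sig_hash, sig, pubkey):
        self.entries[(sig_hash, sig, pubkey)] = True
        if len(self.entries) > self.max_entries:
            self.entries.popitem(last=False)  # evict the oldest entry

def check_sig(cache, sig_hash, sig, pubkey, ecdsa_verify):
    # Fast path: this signature was already verified at relay time.
    if cache.contains(sig_hash, sig, pubkey):
        return True
    if not ecdsa_verify(sig_hash, sig, pubkey):  # slow path
        return False
    cache.add(sig_hash, sig, pubkey)
    return True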
cypherdoc (Legendary)
July 07, 2015, 03:24:19 AM  #28344

Quote from: cypherdoc
as you know, even Gavin talks about this memory problem from UTXO.  and yes, i read the Reddit thread that resulted, in which you participated, and i'm aware that UTXO can be dynamically cached according to needs.
http://gavinandresen.ninja/utxo-uhoh

Quote from: gmaxwell
Gavin was insufficiently precise. There is a reddit thread full of people calling gavin a fool ( :( ) for saying "memory" when he should have been saying fast storage.  https://twitter.com/petertoddbtc/status/596710423094788097

Why do you think it's prudent to argue this with me?

i'm not really arguing about this with you.  you said UTXO is not in memory.  i'm saying it depends on how fast a node wants to verify tx's via the dynamic caching setting they choose, which does get stored in memory.

Quote from: cypherdoc
i didn't say this full block spam attack we're undergoing wasn't affecting my node at_all.  sure, i'm in swap, b/c of the huge # of unconf tx's, but it hasn't shut down or stressed my nodes to any degree.  one of the arguments by Cripplecoiners was that these large block attacks would shut full nodes down from destabilization, resulting in centralization.  i'm not seeing that.

Quote from: gmaxwell
The highest number of unconfirmed transactions I've seen ever is about 8MB. Even if we assume the real max was 3x that, this is not explaining your hundreds of megabytes of swap.  We just had half the hashpower of the network mining without validating, creating multiple large forks and large reorganizations, but you don't see any destabilization. Okay.

ok, i'm not getting the bolded part.  this graph shows 37 MB worth of unconf tx's, no?:

[graph of the unconfirmed-transaction backlog omitted]

Quote from: Peter R
Let me chime in here quickly, because I think Greg and I are talking about slightly different things.  My model was considering the time between the first moment that a pool could begin hashing on a blockheader, and when the previous block had been processed, a new non-empty block template constructed, and the hashers re-assigned to work on this non-empty block.

It looks like this time, empirically, is 15 sec (F2Pool) and 30 sec (AntPool), based on these estimates.

Quote from: gmaxwell
Here I suspect you're suffering from an excess of empiricism without adequately delving into the mechanism.  You can directly measure the time from input to minable on an actual node under your control, and you will observe the time is hundreds of times faster than your estimate. Why?  Miners don't magically know when their pool has new work; they'll get work in the first milliseconds and then grind on it some time before returning work.  Even if the pool long-polls them, it takes time to replace work. So what I suspect you're actually measuring there is the latency of the mining process... which is consistent with what we've experienced with P2Pool (5-20 second latencies from ASIC miners are common).

I noted you posted a result of a classification; did you run the same data through a simple logistic regression with prior size as the treatment? The intercept in the model would be interesting.

But indeed, these conversations have been conflating several separate issues (latency vs throughput, etc.). Tricky to avoid that since they're all relevant.

Quote from: cypherdoc
but you haven't verified that f2pool or Antpool have increased their minrelaytxfee, have you, to minimize their mempool?

Quote from: gmaxwell
I have; they'd previously cranked it down, and were producing small blocks and were flamed in public.  They've since turned it back up.

Quote from: cypherdoc
remember, this whole mempool discussion was based off you responding to Peter's mathematics post the other day, where you argued that block verification times were only 80ms for a 250 kB block b/c tx's had been pre-verified after being passed around to all nodes across the network and didn't require re-verification by miners on the relay network, and was therefore a refutation of his hypothesis of increasing block verification times (16-37 sec on avg) leading to SPV mining.

Quote from: gmaxwell
As PeterR points out, they only need to wait for verification to actually verify (which they're not doing today), though they may have to wait longer to include transactions---- though I point out that's not fundamental: e.g. no matter how big the backlog is, you can produce a template sufficient to completely fill a block while doing no more work than handling a mempool of twice the maximum block size (by using a tiered mempool, though no one has bothered to implement this yet -- no one has even been complaining about how long createnewblock takes, due to the ability to produce empty blocks without skipping transactions).
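
(A toy illustration of the "tiered mempool" idea gmaxwell describes: template construction only ever scans a fast tier capped at twice the maximum block size, so createnewblock's cost stays bounded no matter how large the backlog grows. Sketch only; as he notes, no real implementation existed at the time.)

Code:
import heapq

MAX_BLOCK_BYTES = 1_000_000
FAST_TIER_CAP = 2 * MAX_BLOCK_BYTES  # "twice the maximum block size"

class TieredMempool:
    """Toy two-tier mempool: block templates are built only from the
    bounded fast tier, regardless of total backlog size."""
    def __init__(self):
        self.fast = []        # min-heap of (feerate, size, txid)
        self.fast_bytes = 0
        self.overflow = {}    # txid -> (feerate, size); unbounded backlog

    def add(self, txid, feerate, size):
        heapq.heappush(self.fast, (feerate, size, txid))
        self.fast_bytes += size
        # Demote the cheapest txs once the fast tier exceeds its cap.
        # (A real version would also promote txs back as blocks confirm.)
        while self.fast_bytes > FAST_TIER_CAP and self.fast:
            fr, sz, tid = heapq.heappop(self.fast)
            self.fast_bytes -= sz
            self.overflow[tid] = (fr, sz)

    def block_template(self):
        # Highest feerate first; stop at the block size limit.
        template, used = [], 0
        for fr, sz, tid in sorted(self.fast, reverse=True):
            if used + sz <= MAX_BLOCK_BYTES:
                template.append(tid)
                used += sz
        return template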

Peter R (Legendary)
July 07, 2015, 03:25:28 AM  #28345

Quote from: gmaxwell
I noted you posted a result of a classification; did you run the same data through a simple logistic regression with prior size as the treatment? The intercept in the model would be interesting.

Not yet, but I had the same idea!  Ugh… but right now I have to get back to non-bitcoin work...
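
(The regression gmaxwell suggests is a few lines in any stats package. A sketch with simulated data standing in for the real block observations; the coefficients below are made up purely for illustration.)

Code:
import numpy as np
import statsmodels.api as sm

# Simulated stand-in for the real data: one row per F2Pool block.
rng = np.random.default_rng(0)
prev_kb = rng.uniform(0, 1000, 2000)     # size of the previous block, kB
logit_p = -3.5 + 0.0015 * prev_kb        # made-up "true" relationship
empty = (rng.random(2000) < 1 / (1 + np.exp(-logit_p))).astype(int)

X = sm.add_constant(prev_kb)             # intercept + prior-size treatment
fit = sm.Logit(empty, X).fit(disp=0)
print(fit.params)
# fit.params[0] is the intercept gmaxwell is asking about: the baseline
# log-odds of an empty block even after a tiny previous block, i.e. the
# "big constant term" from miner latency alone.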

Run Bitcoin Unlimited (www.bitcoinunlimited.info)
gmaxwell (Staff, Legendary)
July 07, 2015, 03:33:50 AM  #28346

Quote from: cypherdoc
ok, i'm not getting the bolded part.  this graph shows 37 MB worth of unconf tx's, no?

No clue; no node I have access to is seeing that much -- they may have turned off the minfee rules (not unreasonable for a metrics thing)...

Even given that, again, 37MB doesn't explain your swap.
Adrian-x (Legendary)
July 07, 2015, 03:44:58 AM  #28347

Quote from: Peter R
Quote
why does the 0 tx block have to come "immediately" after a large block?
They don't.  Empty blocks can come after any sized block.  But I just showed that F2Pool is more likely to produce an empty block when the previous block was large than when the previous block was not large.

This makes sense to me because I expect that for large blocks there's more time between when F2Pool has just enough information to begin hashing and when they have processed the block and sent a new non-empty block template to their hashers to work on.  If this time is longer, then there's a better chance they get lucky and mine an empty block.  See what I mean?

So one could assume those empty blocks would never be mined, or would be orphaned, should a competitor be SPV mining.

I'd also assume this phenomenon is only viable while the block subsidy is high enough that transaction fees are inconsequential. In the near future this strategy would most likely be optimized to include the minimum set of tx's that are low-risk and high-reward, balanced against the risk of loss for the few seconds it would take to include them in a block.
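
(Adrian-x's cost-benefit point in numbers: with a 25 BTC subsidy, a pool should only add transactions whose fees beat the expected loss from whatever extra orphan risk the bigger, slower-to-build block incurs. Illustrative sketch; the probabilities are made-up placeholders.)

Code:
SUBSIDY_BTC = 25.0  # 2015-era block subsidy

def worth_including(fees_btc, extra_orphan_prob):
    # Include the tx set iff the added fees beat the expected loss from
    # the marginally higher orphan risk.
    expected_gain = (1 - extra_orphan_prob) * fees_btc
    expected_loss = extra_orphan_prob * SUBSIDY_BTC
    return expected_gain > expected_loss

print(worth_including(0.2, 0.01))  # False: the subsidy dominates
print(worth_including(2.0, 0.01))  # True: fees now justify the risk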

Thank me in Bits 12MwnzxtprG2mHm3rKdgi7NmJKCypsMMQw
cypherdoc (Legendary)
July 07, 2015, 03:45:24 AM  #28348

Quote from: gmaxwell
Quote
ok, i'm not getting the bolded part.  this graph shows 37 MB worth of unconf tx's, no?
No clue; no node I have access to is seeing that much -- they may have turned off the minfee rules (not unreasonable for a metrics thing)...

Even given that, again, 37MB doesn't explain your swap.

yeah, i had noticed that.  strange...
cypherdoc (Legendary)
July 07, 2015, 03:55:22 AM  #28349

Quote from: gmaxwell
Quote
ok, i'm not getting the bolded part.  this graph shows 37 MB worth of unconf tx's, no?
No clue; no node I have access to is seeing that much -- they may have turned off the minfee rules (not unreasonable for a metrics thing)...

Even given that, again, 37MB doesn't explain your swap.

this is what they say on their website.  i should try to find out the exact #:

Quote from: TradeBlock
TradeBlock maintains an extensive bitcoin network data architecture with multiple nodes across geographies. With the ability to view and record every message broadcast to the network, including those that are not extensively relayed, unique insights regarding the network may be derived.
TPTB_need_war (Sr. Member)
July 07, 2015, 04:30:58 AM  #28350

Quote
Conclusions are valid in substance, but in terms of not getting blocks for days, that's not really a big problem for a pool (or solo mining farm) being run as a business. Most expenses (hosting, etc.) are going to run monthly, possibly somewhat more often (paying workers), but not daily. Once you look at a one-month window it is about the same as Litecoin: almost certain to find a block in a month. With some ability to draw on reserves or credit in the event of a few bad months it should still be viable.

I think you missed the economic reasoning, which is that pools have to compete at near-zero margins, which means any losses from overpaying ephemeral miners during some periods (e.g. any variant of pool hopping, or just miners who come and go for any reason) mean bankruptcy relative to pools with larger hashrate share, which don't incur that variance in profitability.

Variance kills a unit-of-exchange currency, and it has the same concentrating effect on pools as they are currently defined in the network design for PoW coins (which btw I have a fix for [1]).

And that centralizing effect is exacerbated by the in-band incentives (e.g. lower latency and orphan rate relative to hashrate) and out-of-band incentives (e.g. being paid to double-spend a 0-conf, etc.) that may accrue to anyone running hidden monopolies via a Sybil attack, perhaps able to run at negative profit margins and thus drive smaller pools extinct. Again I challenge anyone to prove that pools are not Sybil attacked (meaning multiple pools effectively owned by the same controlling entity). Since it can't be disproved, Cypherdoc's assumption that miners are independent cannot be proven true, yet it can be falsified by the math and economic reasoning I presented above (although some will refuse to accept the math because they can't falsify it with physical evidence).

All of this gets worse as bandwidth requirements are scaled up.

[1] my improvement over CN need not be anonymity (which could just be copied from CN), but rather network characteristics.
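
(The variance claim can be put in numbers: blocks found per month are roughly Poisson, so a pool's relative revenue spread scales as 1/sqrt(expected blocks), which is what squeezes small pools competing at near-zero margins. Illustrative sketch.)

Code:
import math

BLOCKS_PER_MONTH = 30 * 144  # ~4320 blocks at one per 10 minutes

def relative_stddev(hashrate_share):
    # Month-to-month revenue spread as a fraction of expected revenue,
    # modelling blocks found as Poisson(share * BLOCKS_PER_MONTH).
    expected_blocks = hashrate_share * BLOCKS_PER_MONTH
    return 1.0 / math.sqrt(expected_blocks)

for share in (0.20, 0.02, 0.002):
    print(f"{share:6.1%} pool: +/- {relative_stddev(share):5.1%} of monthly revenue")
#  20.0% pool: +/-  3.4% of monthly revenue
#   2.0% pool: +/- 10.8% of monthly revenue
#   0.2% pool: +/- 34.0% of monthly revenue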

Adrian-x (Legendary)
July 07, 2015, 04:42:38 AM  #28351

Quote from: Peter R
Interesting!
And this is why I like the empirical "black box" approach.  I don't care initially what the mechanism is.  I try to find a simple model that explains the effect, and then, later, ask what that mechanism might be.
But now why would the "latency of the mining process" depend on the size of the previous block?  That doesn't make sense to me, but we just showed empirically that F2Pool is indeed more likely to produce an empty block when the previous block is large.

Quote from: gmaxwell
I wouldn't expect the miner-latency part to be size dependent: the miner can't even tell how big the prior block was.  I expect your function relating them to have a big constant term in it! (That's why I asked if you tried other regression approaches.)

I suppose there may be some dependence introduced by virtue of what percentage of the miners got the dummy work.  It would be pretty interesting to try to separate that.

Another trap of empirical analysis in this kind of discussion is that we can only measure how the system is -- but then we use that to project the future.  E.g. say we didn't have ECDSA caching today: you might then measure that it was taking >2 minutes to verify a maximum-size block... and yet 100 lines of code and that cost vanishes, which is bad news if you were counting on it to maintain incentives. :)

Miners can't tell as soon as a block header is published, but they can tell once they get the block.  According to Peter's analysis, bigger blocks take longer to propagate, so SPV mining fills the gap until the block is known.  So yes, you're correct that they can't know, but not knowing isn't a good explanation for why we see a higher percentage of empty blocks after a big block.

Thank me in Bits 12MwnzxtprG2mHm3rKdgi7NmJKCypsMMQw
TPTB_need_war (Sr. Member)
July 07, 2015, 04:49:12 AM  #28352

Quote from: gmaxwell
Another trap of empirical analysis in this kind of discussion is that we can only measure how the system is -- but then we use that to project the future.  E.g. say we didn't have ECDSA caching today: you might then measure that it was taking >2 minutes to verify a maximum-size block... and yet 100 lines of code and that cost vanishes, which is bad news if you were counting on it to maintain incentives. :)

Good point, and more accurate still is "we can only measure how we think it is".  We don't even have absolute proof that someone isn't running a superior algorithm or hardware solution, although we can often make very strong educated inductions.

cypherdoc (Legendary)
July 07, 2015, 04:53:38 AM  #28353

Quote from: gmaxwell
I wouldn't expect the miner-latency part to be size dependent: the miner can't even tell how big the prior block was.  I expect your function relating them to have a big constant term in it! (That's why I asked if you tried other regression approaches.)
...
Another trap of empirical analysis in this kind of discussion is that we can only measure how the system is -- but then we use that to project the future.  E.g. say we didn't have ECDSA caching today: you might then measure that it was taking >2 minutes to verify a maximum-size block... and yet 100 lines of code and that cost vanishes, which is bad news if you were counting on it to maintain incentives. :)

Quote from: Adrian-x, as edited by cypherdoc
Miners can't tell as soon as a block header is published, but they can tell once they get the block.  According to Peter's analysis, bigger blocks received take longer to process (not just propagate), so SPV mining fills the gap until the block has been processed (not just known).  So yes, you're correct they can't know, but not knowing isn't a good explanation for why we see a higher percentage of empty blocks after a big block.

ftfy
TPTB_need_war (Sr. Member)
July 07, 2015, 04:55:51 AM  #28354

Quote
If you don't want to be called LeBron until the day Lord Satoshi moves His Holy Coins, I suggest apologizing for accusing the core devs (who write the free code you run) of impropriety/malfeasance/obstructionism/etc.

And you have the audacity to assert I am delusional for having faith in Armstrong's investment in his models and the performance I have observed thereof.  At least I am not blind to the existence of my faith, as opposed to your claimed first-hand knowledge of the open source, which you don't have either in the case of your Lord; and I also doubt you've internalized all the source code written by the ongoing devs.

Haters gonna hate...

cypherdoc (Legendary)
July 07, 2015, 05:05:27 AM  #28355

Quote from: gmaxwell
Quote
ok, i'm not getting the bolded part.  this graph shows 37 MB worth of unconf tx's, no?
No clue; no node I have access to is seeing that much -- they may have turned off the minfee rules (not unreasonable for a metrics thing)...

Even given that, again, 37MB doesn't explain your swap.

but turning off the minfee rules would only serve to include 0-fee tx's.

look at how much BTC value is included in that 37MB.
cypherdoc (Legendary)
July 07, 2015, 05:07:30 AM  #28356

looks like they may have turned off SPV mining.  that would be good.
TPTB_need_war (Sr. Member)
July 07, 2015, 05:16:48 AM  #28357

Quote
I favor Adam Backamoto's extension block proposal.

The 1MB blocksize limit reminds me of the old 640k limit in DOS.

Rather than destroy Windows' interoperability with the rich/valuable legacy of 8088-based software, that limit was extended via various hacks -- er, sublime software engineering.

Where can I find a specification for this extension blocks proposal, so I can determine how it is differentiated from a pegged side chain?

Mike Hearn is so obviously for centralization that one has to separate out his objective points, and he appears to have a point about Lightning networks not being viable for much but very specialized interactions (which was also my upthread point about them).

I continue to think existing PoW designs are stuck between a rock and a hard place, thus I am very interested to read more detail about this proposal.

Edit: I found it, http://sourceforge.net/p/bitcoin/mailman/message/34157058/

The only way I see this differentiated from pegged side chains is that it could optionally be a one-way transfer of BTC to the extension block chain, which removes the need to rely on federated servers.

I understand that if address formats are differentiated between the two chains, then it is claimed one could pay from one chain to the other, but I can't see how that can work, because the miners on the lower-bandwidth chain would be SPV miners on the extension block chain. Thus we get all the domino effects (due to orphaned chains) and insecurity ramifications of pegged side chains, which I argued upthread are untenable, unless you allow the BTC on each chain to have a different value. Afaics, the only reliable way to move pegged value back and forth between chains is as I wrote upthread: with very long challenge periods on transferred funds, and using the Blockstream SPV proofs.

Thus one can view this proposal as a trojan horse to require Blockstream's pegged chains (and either the federated servers, or the soft fork for the added opcode that eliminates the federated servers). Clever.

However, I still like the proposal for the same reasons I liked pegged side chains. But pleeeaaaase do it correctly. None of the mumbo jumbo about instant transfers between pegged chains. Sheesh.

And note this does nothing to solve the decentralization problem; it just transfers the centralization to the extension block chain, because the 1MB chain can ultimately be 51% attacked as its relative block rewards wither while debasement diminishes and transactions move to the extension block chain (unless extension-block transactions carry extremely small fees and the 1MB chain sufficiently high fees, which is very, very unlikely, because centralization by definition drives towards monopolistic pricing). Also, per the math and economic reasoning above, solo mining is doomed anyway.

domob (Legendary)
July 07, 2015, 05:35:05 AM  #28358

Quote from: cypherdoc
look at what we're facing with this latest spam attack. ... the fastest way to eliminate this attack on users is to lift the block size limit to alleviate the congestion and increase the expense of the spam

So you think that it would be best to simply put all that spam on the blockchain, and have everyone who ever wants to validate it go through that spam for eternity?  Wouldn't it be better to simply let the network and miners adjust their fee policies as they see fit, and make sure the spam is not even mined unless it pays "enough" fees?
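
(For reference, 2015-era Bitcoin Core already exposed the fee-policy knobs domob is pointing at; a node or miner could tighten them in bitcoin.conf without any protocol change. The values below are illustrative, not recommendations.)

Code:
# bitcoin.conf (Bitcoin Core 0.10/0.11-era policy options; values illustrative)
minrelaytxfee=0.00005     # BTC/kB floor for relay and mempool acceptance
blockmaxsize=750000       # max bytes of transactions in locally mined blocks
blockprioritysize=50000   # bytes reserved for high-priority, low-fee txs
blockminsize=0            # no forced minimum block fill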

Use your Namecoin identity as OpenID: https://nameid.org/
Donations: 1domobKsPZ5cWk2kXssD8p8ES1qffGUCm | NMC: NCdomobcmcmVdxC5yxMitojQ4tvAtv99pY
BM-GtQnWM3vcdorfqpKXsmfHQ4rVYPG5pKS | GPG 0xA7330737
Peter R (Legendary)
July 07, 2015, 06:06:02 AM  #28359

Quote from: domob
So you think that it would be best to simply put all that spam on the blockchain, and have everyone who ever wants to validate it go through that spam for eternity?  Wouldn't it be better to simply let the network and miners adjust their fee policies as they see fit, and make sure the spam is not even mined unless it pays "enough" fees?

Who decides what "enough" fees is?

I think this whole blocksize debate comes down to ideology.  It's not a technical decision.  Some people think we need to keep one degree of freedom of control to shape Bitcoin "correctly," while others think we should let destiny take the wheel.

Run Bitcoin Unlimited (www.bitcoinunlimited.info)
TPTB_need_war (Sr. Member)
July 07, 2015, 06:33:04 AM  #28360

Quote from: domob
So you think that it would be best to simply put all that spam on the blockchain, and have everyone who ever wants to validate it go through that spam for eternity?  Wouldn't it be better to simply let the network and miners adjust their fee policies as they see fit, and make sure the spam is not even mined unless it pays "enough" fees?

Quote from: Peter R
Who decides what "enough" fees is?

Actually you ask precisely the correct question, if you want to find the correct answer for decentralization. Hint, hint. But that answer is more clever and complex than I expect you to find.

Quote from: Peter R
I think this whole blocksize debate comes down to ideology.  It's not a technical decision.  Some people think we need to keep one degree of freedom of control to shape Bitcoin "correctly," while others think we should let destiny take the wheel.

You think that because you are blinded.

My well-informed, prescient suggestion is to wholeheartedly support the pegged side chain direction; otherwise, prepare to exchange your BTC for another monetary unit, or accept centralization.
