misterbigg (OP)
Legendary
Offline
Activity: 1064
Merit: 1001
March 06, 2013, 08:32:58 AM
Just reiterating my prediction so we can see how it plays out. We are currently on #2, with a lot of unconfirmed transactions, and starting to see #3. We should see transaction fees increase and more and more blocks larger than 250 KB as miners uncap the soft limit. We should see what happens as we run into the soft block size limits... what do you predict will happen? In this order:

1. Most blocks are at or near the 250 kilobyte soft limit.
2. The memory pool of transactions grows due to insufficient space in blocks.
3. Users notice the trend of transactions taking longer to confirm, or not confirming at all.
4. Fees increase as users pay more to improve confirmation times.
5. Miners (or mining pools) modify code to select transactions with the highest fees per kilobyte to fit into blocks. They remove the 250 kilobyte soft limit. Some miners disallow free transactions entirely.
6. Transactions clear much more quickly now, but fees decrease.
7. Blocks increase in size until they are at or near the one megabyte hard limit.
8. Fees start increasing. Free transactions rarely confirm at all now.
9. Small transactions are eliminated since they are no longer economically feasible. SatoshiDice increases betting minimums along with fees. The volume of SatoshiDice transactions decreases.
10. Users at the margins of transaction profitability with respect to fees are pushed off the network.
11. Many people, mostly non-technical, clamor for the block size limit to be lifted.
12. Fees reach an equilibrium where they remain stable.
13. Spurred by the profitability of Bitcoin transactions, alternate chains appear to capture the users that Bitcoin lost.
14. Pleased with their profitability, miners refuse to accept any hard fork to block size.
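The selection policy in step 5 amounts to a greedy fill: sort the mempool by fee per kilobyte and pack transactions until the block is full. A toy sketch of that idea (the tuple layout and numbers are made up for illustration, not the reference client's actual code):

```python
# Greedy fee-per-KB block filling: a toy sketch, not actual miner code.
MAX_BLOCK_BYTES = 1_000_000  # the 1 MB hard limit

def fill_block(mempool):
    """mempool: list of (txid, size_bytes, fee_satoshis) tuples."""
    # Highest fee per kilobyte first; free transactions sort last.
    ordered = sorted(mempool, key=lambda tx: tx[2] / (tx[1] / 1000), reverse=True)
    block, used = [], 0
    for txid, size, fee in ordered:
        if used + size <= MAX_BLOCK_BYTES:
            block.append(txid)
            used += size
    return block

# "c" pays the best rate and goes first; "b" no longer fits and waits.
txs = [("a", 500_000, 50_000), ("b", 600_000, 10_000), ("c", 300_000, 90_000)]
print(fill_block(txs))  # ['c', 'a']
```

Under this policy, anything below the marginal fee rate simply sits in the mempool until space frees up, which is exactly the dynamic steps 2 through 4 describe.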
DannyHamilton
Legendary
Offline
Activity: 3486
Merit: 4851
March 06, 2013, 08:43:51 AM
- snip - The volume of SatoshiDice transactions decrease. - snip -
I suspect that you underestimate the power of a gambling addiction.
poly
Member
Offline
Activity: 84
Merit: 10
Weighted companion cube
March 06, 2013, 11:10:43 AM
Not another disguised post promoting the scam & premined Ripple currency after being bribed by OpenCoin Inc again...
AbsoluteZero
Member
Offline
Activity: 66
Merit: 10
March 06, 2013, 12:42:29 PM
14. Pleased with their profitability, miners refuse to accept any hard fork to block size.

You forgot: 15. Looking at the profitability of miners, every Joe Schmoe buys an ASIC machine until mining sucks.
misterbigg (OP)
Legendary
Offline
Activity: 1064
Merit: 1001
March 06, 2013, 12:54:11 PM
Not another disguised post promoting the scam & premined Ripple currency after being bribed by OpenCoin Inc again...

Not at all; in fact, if you read my post you'll realize that this is bullish for Bitcoin's value and security.
dree12
Legendary
Offline
Activity: 1246
Merit: 1078
March 06, 2013, 12:55:12 PM
14. Pleased with their profitability, miners refuse to accept any hard fork to block size.
The network isn't controlled by the miners. A hard fork the miners "refuse to accept" will have new miners that take their place.
justusranvier
Legendary
Offline
Activity: 1400
Merit: 1013
March 06, 2013, 01:34:01 PM
11. Many people, most non-technical, clamor for the block size limit to be lifted. 12. Fees reach an equilibrium where they remain stable. 13. Spurred by the profitability of Bitcoin transactions, alternate chains appear to capture the users that Bitcoin lost. 14. Pleased with their profitability, miners refuse to accept any hard fork to block size.

11. News articles start appearing in the media pointing out the 7 tps hard transaction limit as a fatal flaw in Bitcoin.
12. At best, fees never exceed 1/10 to 1/5 of the block subsidy.
13. Business investment drastically slows with regards to all forms of distributed cryptocurrency, and more capital is directed towards centralized solutions.
14. Miners realize they killed the goose that laid the golden egg.
Technomage
Legendary
Offline
Activity: 2184
Merit: 1056
Affordable Physical Bitcoins - Denarium.com
March 06, 2013, 04:08:32 PM
11. News articles start appearing in the media pointing out the 7 tps hard transaction limit as a fatal flaw in Bitcoin. 12. At best, fees never exceed 1/10 to 1/5 of the block subsidy. 13. Business investment drastically slows with regards to all forms of distributed cryptocurrency and more capital is directed towards centralized solutions. 14. Miners realize they killed the goose that laid the golden egg.

Exactly. If the miners want to see Bitcoin scale at all, they will have to agree to a hard fork. As long as a good solution is agreed upon, by the dev team at least.
Denarium closing sale discounts now up to 43%! Check out our products from here!
jgarzik
Legendary
Offline
Activity: 1596
Merit: 1100
March 06, 2013, 04:35:35 PM
Just reiterating my prediction so we can see how it plays out. We are currently on #2, a lot of unconfirmed transactions and starting to see #3. We should see transaction fees increase and also more and more blocks larger than 250kb as miners uncap the soft limit.

The number of unconfirmed transactions is not larger than average over a 24-hour period. A snapshot of the mempool -- like the blockchain.info link above -- does not fit the thesis, for two reasons:
- Never-will-confirm transactions and low-priority transactions bloat the mempool
- Some miners sweep far more than 250k worth of transactions, so some miners already sweep large swaths into blocks
This situation has been ongoing for months now.
Jeff Garzik, Bloq CEO, former bitcoin core dev team; opinions are my own. Visit bloq.com / metronome.io Donations / tip jar: 1BrufViLKnSWtuWGkryPsKsxonV2NQ7Tcj
misterbigg (OP)
Legendary
Offline
Activity: 1064
Merit: 1001
March 06, 2013, 04:41:56 PM
Is there a place that describes how the reference client deals with the memory pool? Like, what happens when it fills up (which transactions get purged, if any, and after how long)?
jgarzik
Legendary
Offline
Activity: 1596
Merit: 1100
March 06, 2013, 04:47:37 PM
Is there a place that describes how the reference client deals with the memory pool? Like, what happens when it fills up (which transactions get purged, if any, and after how long)?
The only way transactions are purged is by appearing in a block. At present the pool cannot "fill up" except by using all available memory and getting OOM-killed. Therefore, you can see how any long-running node will eventually accumulate a lot of dead weight. The mempool only stores provably spendable transactions, so it is DoS'able, but you must do so with relayable standard transactions.
misterbigg (OP)
Legendary
Offline
Activity: 1064
Merit: 1001
March 06, 2013, 04:57:02 PM
...you can see how any long-running node will eventually accumulate a lot of dead weight.

Wow... tightrope walking with no net. If blocks are always filled and fees go up, the SatoshiDICE transactions (low fee) will clog the memory pool, and I guess eventually there will need to be a patch.

The mempool only stores provably spendable transactions, so it is DoS'able, but you must do so with relay-able standard transactions.

Why aren't mempool transactions purged after some fixed amount of time? This way someone could determine with certainty that their transaction will never make it into a block. Apologies if this has already been asked many times (it probably has).
conv3rsion
March 06, 2013, 06:37:34 PM
14. Pleased with their profitability, miners refuse to accept any hard fork to block size.
Because why sell 1000 apples for $0.75 each when you can instead sell 10 for $1.00 each. Especially when your variable cost for additional apples is effectively zero. Makes perfect fucking sense. Even better, turns out there are enough oranges for everyone to have one, and nobody gives a shit about apples at all anymore.
jgarzik
Legendary
Offline
Activity: 1596
Merit: 1100
March 06, 2013, 07:12:06 PM
...you can see how any long-running node will eventually accumulate a lot of dead weight. Wow...tightrope walking with no net. If blocks are always filled and fees go up, the SatoshiDICE transactions (low fee) will clog the memory pool and I guess eventually there will need to be a patch.

Correct. It's not needed right now, so we are able to avoid the techno-political question of what to delete from the mempool when it becomes necessary to cull.

The mempool only stores provably spendable transactions, so it is DoS'able, but you must do so with relay-able standard transactions. Why aren't mempool transactions purged after some fixed amount of time? This way someone could determine with certainty that their transaction will never make it into a block. Apologies if this has already been asked many times (it probably has).

As a matter of fact, that is my current proposal on the table, which has met with general agreement: purge transactions from the memory pool if they do not make it into a block within X [blocks | seconds]. Once this logic is deployed widely, it has several benefits:
- TX behavior is a bit more deterministic.
- Makes it possible (but not 100% certain) that a transaction may be revised or double-spent-to-recover if it fails to make it into a block.
- The mempool is capped by a politically-neutral technological limit.

Patches welcome. I haven't had time to implement the proposal, and nobody else has stepped up.
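The purge rule described here can be sketched in a few lines: stamp each transaction with the height at which it entered the pool, and evict anything that has waited more than X blocks. A toy illustration only, assuming a dict-based pool; the names `EXPIRY_BLOCKS` and `Mempool` are hypothetical, and the thread never fixed a value for X:

```python
# Toy sketch of the proposal: evict transactions not mined within X blocks.
EXPIRY_BLOCKS = 10  # the "X" in the proposal; chosen arbitrarily here

class Mempool:
    def __init__(self):
        self.txs = {}  # txid -> height at which the tx entered the pool

    def add(self, txid, current_height):
        self.txs[txid] = current_height

    def on_new_block(self, height, mined_txids):
        # Existing purge path: the tx appeared in a block.
        for txid in mined_txids:
            self.txs.pop(txid, None)
        # Proposed purge path: the tx lingered past the expiry window.
        self.txs = {t: h for t, h in self.txs.items()
                    if height - h < EXPIRY_BLOCKS}

pool = Mempool()
pool.add("old", current_height=100)
pool.add("new", current_height=108)
pool.on_new_block(height=110, mined_txids=[])
print(sorted(pool.txs))  # ['new'] -- "old" has waited 10 blocks and is evicted
```

The appeal of the rule is visible in the sketch: the cap emerges from a time limit rather than from anyone deciding which transactions "deserve" to stay.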
TierNolan
Legendary
Offline
Activity: 1232
Merit: 1104
March 06, 2013, 07:28:19 PM
As a matter of fact, that is my current proposal on the table, which has met with general agreement: purge transactions from the memory pool if they do not make it into a block within X [blocks | seconds]. Once this logic is deployed widely, it has several benefits:
- TX behavior is a bit more deterministic.
- Makes it possible (but not 100% certain) that a transaction may be revised or double-spent-to-recover if it fails to make it into a block.
- The mempool is capped by a politically-neutral technological limit.
Patches welcome. I haven't had time to implement the proposal, and nobody else has stepped up.

Clients should re-broadcast transactions, or assume they are lost, if they fail to be included after X * 4 [blocks | seconds]. I would also add a rule that a tx which is the same as a transaction already in the pool, except that it has a tx fee at least double the current version's, should replace the current version and be relayed. The client could tell the user the transaction failed to be sent and ask if the user wants to increase the fee.
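TierNolan's replacement rule can be sketched as a relay-time check: accept a conflicting version of a pooled transaction only if its fee is at least double the old one. A minimal illustration under that assumption; the dict-based pool and the function name `maybe_replace` are hypothetical, not client code:

```python
# Sketch of the at-least-double rule: a duplicate tx replaces the pooled
# version only if its fee is >= 2x the current one; otherwise it is dropped.
def maybe_replace(pool, txid, new_fee):
    """pool: dict mapping txid -> current fee (satoshis). Returns True if relayed."""
    old_fee = pool.get(txid)
    if old_fee is None:
        pool[txid] = new_fee       # not seen before: accept normally
        return True
    if new_fee >= 2 * old_fee:     # fee at least doubled: replace and relay
        pool[txid] = new_fee
        return True
    return False                   # too small a bump: keep the old version

pool = {"tx1": 10_000}
print(maybe_replace(pool, "tx1", 15_000))  # False: less than double
print(maybe_replace(pool, "tx1", 20_000))  # True: doubled, replaced
```

Requiring a doubling (rather than any increase) keeps an attacker from relaying an endless stream of tiny fee bumps for free.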
1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
jgarzik
Legendary
Offline
Activity: 1596
Merit: 1100
March 06, 2013, 07:57:17 PM
Clients should re-broadcast transactions or assume they are lost, if they fail to be included after X * 4 [blocks | seconds]

The current behavior of clients is fine: rebroadcast continually while your transaction is not in a block. Optionally, in the future, clients may elect not to rebroadcast. That is fine too, and works within the current or future system.
solex
Legendary
Offline
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
March 06, 2013, 08:09:27 PM Last edit: March 06, 2013, 09:08:54 PM by solex
Let's be generous and assume an average 90% probability that each step in the predicted chain of events occurs as described...
Event  Probability
  1      100%
  2       90%
  3       81%
  4       73%
  5       66%
  6       59%
  7       53%
  8       48%
  9       43%
 10       39%
 11       35%
 12       31%
 13       28%
 14       25%

End result:

25% - Smooth transition. All: "Hail misterbigg"
75% - Train wreck, emergency block size increase. misterbigg: "Sorry, my next prediction will be better!"
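solex's table is just repeated multiplication by 0.9: each step succeeds with 90% probability given the previous one, so step n happens with probability 0.9^(n-1). A quick check of the compounded figures:

```python
# Compound probability of reaching each of the 14 steps when every step
# after the first succeeds with probability 0.9, as in solex's table.
p_step = 0.9
probs = [round(100 * p_step ** n) for n in range(14)]  # steps 1..14
print(probs)
# [100, 90, 81, 73, 66, 59, 53, 48, 43, 39, 35, 31, 28, 25]
```

The final entry, 25%, is where the "Hail misterbigg" outcome comes from; the complementary 75% is the train-wreck branch.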
misterbigg (OP)
Legendary
Offline
Activity: 1064
Merit: 1001
March 06, 2013, 09:50:05 PM
... 75% Train wreck, emergency block size increase. misterbigg: "Sorry, my next prediction will be better!" ...
Heh... well, remember that the negative consequences of leaving the block size alone are far less severe than those of implementing a faulty system for making it adjustable. If we do nothing, we can always change it later. The worst that happens is we have a period of time where transaction fees are higher than normal and transactions take longer to confirm. Certainly not the end of the world by any stretch. Compare this with adjusting the block size and then discovering that, well, yeah, it seems retep was right about losing some decentralization due to bandwidth.
maxcarjuzaa
March 06, 2013, 10:21:28 PM
100% agree with the prediction; this can be seen as a major flaw in Bitcoin.
Maybe it is time to present BTC's little brother to the world: LTC.
Is there any limit on LTC block size?
If the answer is yes, maybe it is a good time for the devs of both coins to collaborate.
We can have the gold and the silver, like someone said.
Better now than when it is too late.
jmw74
March 07, 2013, 03:18:28 AM
13. Spurred by the profitability of Bitcoin transactions, alternate chains appear to capture the users that Bitcoin lost. 14. Pleased with their profitability, miners refuse to accept any hard fork to block size.
I'm sorry, I don't get it. At step 13, transactions (and the fees they would have paid to miners) are fleeing Bitcoin in droves. And at step 14, the Bitcoin miners are *pleased* with this? Why?

It makes no sense to me at all to impose a permanent hard limit of 1 MB. Whatever reasons are given for keeping it could be used as reasons to *lower* it, and no one thinks we should lower it. I don't agree with this "artificial scarcity" business, unless the point of it is to help level the playing field in terms of hardware requirements. In that sense, it's not really artificial scarcity, is it? It's scarcity of real resources: bandwidth, storage, and CPU speed. I mean, we all agree that if everyone had 10-gigabit ethernet, 256 cores, and 100 TB of storage, the 1 MB limit would seem laughable, right? Well, soon we'll all have that. And a few years after that, we'll all have it in the palm of our hand.

Here's my modest (and likely naive) proposal:
1) See if a scheme to reduce resource consumption in the protocol can be worked out (I think storage requirements are already being addressed, but I'm not sure about bandwidth).
2) Whatever comes of that, plot historical hardware capability progress and project the curve into the future.
3) Hard fork the client to follow the curve projection.
4) If hardware doesn't end up matching predictions, fork again as necessary. I doubt a second fork would be needed for decades.
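The curve-following idea in steps 2 and 3 can be sketched as a simple exponential projection. The 50%-per-year growth rate below is purely a hypothetical placeholder; jmw74 names no figure, and the real proposal would fit the rate to measured hardware trends:

```python
# Toy sketch of jmw74's proposal: grow the block size limit along an
# assumed hardware capability curve. The growth rate is hypothetical.
GROWTH_PER_YEAR = 1.5   # assumed 50%/year capability growth (placeholder)
BASE_LIMIT_MB = 1.0     # the current 1 MB hard limit
BASE_YEAR = 2013

def projected_limit_mb(year):
    """Block size limit implied by the assumed growth curve."""
    return BASE_LIMIT_MB * GROWTH_PER_YEAR ** (year - BASE_YEAR)

for year in (2013, 2017, 2021):
    print(year, round(projected_limit_mb(year), 1))
```

Step 4 then reduces to comparing the curve against observed hardware and hard-forking a corrected rate only if the two diverge badly.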