So any updates? Small guys are dumping, but that does not mean much. I don't think you're seeing small players dumping right now... I'm just as interested in what SecondMarket people are doing, though. I bought through SecondMarket around $700/BTC and I'm delighted to stay the course. I wouldn't have sold even if I could have. Either Bitcoin et al. is the biggest thing since electricity and the wheel, or it goes to zero. As applications deploy, the utility/value of Bitcoin will rise. The lackluster sentiment right now is just fine. What I'd like to do is acquire more bitcoins, but that would compromise my diversification.
|
|
|
Base 16? What are the numbers 10, 11, 12, 13, 14, 15, 16 with that?
One common convention is:

  base 10   base 16
  -------   -------
     0         0
     1         1
     2         2
     3         3
     4         4
     5         5
     6         6
     7         7
     8         8
     9         9
    10        0A **
    11        0B
    12        0C
    13        0D
    14        0E
    15        0F
    16        10

  ** the leading zeros help to distinguish these hex digits from other alphabetic characters

I have created another Google sheet https://docs.google.com/spreadsheets/d/1kC3IfxBsl5VGTpc6un59m5U4CrsXtl7tgUeLbcZE7G8 to illustrate; click the "using letters" tab to see the classic base 16.
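Incidentally, this zero-padded mapping is exactly what standard hex formatting produces; a quick Python sketch of the table above (the `02X` format pads to two places, mirroring the leading-zero convention):

```python
# Print a base-10 to base-16 conversion table, zero-padding the hex
# digits to two places as described above (10 -> 0A, 15 -> 0F, 16 -> 10).
for n in range(17):
    print(f"{n:>2}  ->  {n:02X}")
```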
|
|
|
You're right. That was Gavin-bait. Well, David Rabahy, it's been six months and still no field report. Maybe next year. lol
Huh what? Field report from my talk at the CFR? I barely remember it, there were probably 50 people in suits in the audience, nobody I recognized. The whole thing is on video, there were no secret meetings, I got there 10 minutes before my talk and left 10 minutes after (I had a bunch of interviews with DC-based journalists scheduled... and I think that was the trip I had lunch with Jim Harper and got a tour of Cato, although I might be mis-remembering). Thank you; one wonders if the CFR might want a refresher eventually.
|
|
|
Based on this, for example, Polmine tends strongly toward smaller blocks. Meanwhile, DiscusFish/F2Pool does a much better job of producing bigger blocks.
|
|
|
Windows 8.1, Satoshi v0.9.3.0-g40d2041-beta; example output from the Debug console "getrawmempool true" command:
{
    "000308b9c51a0ba76d57efd8897159d95b8278e4fc0e3cb480b3d15343a1aadd" : {
        "size" : 374,
        "fee" : 0.00010000,
        "time" : 1414369834,
        "height" : 327133,
        "startingpriority" : 4976624.92307692,
        "currentpriority" : 5160750.84553682,
        "depends" : [
            "60c66a89e247760aa4cb29517ba79bbb2bbe773823996135fc7035c74f8be171"
        ]
    },
    "00349a4799b7b787e9733f38fc01a8f5dc801f7e35e3071a706831395d67086e" : {
        "size" : 520,
        "fee" : 0.00000001,
        "time" : 1414209735,
        "height" : 326867,
        "startingpriority" : 40.33333333,
        "currentpriority" : 10311.10448718,
        "depends" : [
            "75ba09c16b35b3495a7d829030dbafbed4e8e6806c8bc58207f8472e85749187"
        ]
    },
    ...
}
DOS batch file to collapse the output so that each transaction ends up on a single line (good for feeding into Excel):
@echo off
Setlocal EnableDelayedExpansion
SET new_line=
FOR /F "delims=" %%l IN (raw.txt) DO (
    if "%%l" == "}," (
        echo !new_line!
        SET new_line=
    ) ELSE (
        SET new_line=!new_line! %%l
    )
)
To invoke;
C:\bitcoin>collapse >btc_txn.txt
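If Python happens to be available, the same flattening can be done without the batch file by parsing the JSON directly. A minimal sketch; the two-transaction `raw` string stands in for the real `getrawmempool true` output, which you would read from a file instead:

```python
import json

# Flatten `bitcoind getrawmempool true` output into one tab-separated
# line per transaction, suitable for pasting into Excel.  `raw` is a
# small inline sample; in practice use raw = open("raw.txt").read().
raw = """{
  "000308b9c51a0ba76d57efd8897159d95b8278e4fc0e3cb480b3d15343a1aadd":
      {"size": 374, "fee": 0.00010000, "time": 1414369834, "height": 327133},
  "00349a4799b7b787e9733f38fc01a8f5dc801f7e35e3071a706831395d67086e":
      {"size": 520, "fee": 0.00000001, "time": 1414209735, "height": 326867}
}"""

mempool = json.loads(raw)
print("txid\tsize\tfee\ttime\theight")
for txid, tx in mempool.items():
    print(f"{txid}\t{tx['size']}\t{tx['fee']:.8f}\t{tx['time']}\t{tx['height']}")
```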
|
|
|
I don't have historical data, but I just set up an rrdtool database to track the number of transactions on my full node. The stats for the last 24 hours are shown here [1]; the pic is updated every 30 minutes, and I will add more for 30 and 360 days once the database has enough data. As you can see from the little data that's already there (collecting ~1 hour now), we are already closer to 4,000 transactions waiting than to 2,000. The raw data is gathered every minute with the following command

bitcoind getrawmempool false | wc -l

and is not filtered in any way that is not inherent to bitcoind.

bitcoind getrawmempool true | grep fee | grep 0.00000000 | wc -l

shows that right now 2792 of 3685 TX are without fee. I might make another database to improve the stats.

[1] http://213.165.91.169/pic/mempool24h.png

Inspirational! Starting from your spark I found https://blockchain.info/tx/e30a4add629882d360bc87ecc529733a9824d557690d1e5769453954ea4a1056. It appears to be the oldest transaction waiting at this moment. It was 31:34 old at the time of block #327136. Block #326954 was the first block that could have added it to the block chain, 182 blocks ago. One wonders how old the oldest transaction is that includes a fee.
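The same zero-fee count can be pulled from the JSON form rather than grepping for "0.00000000"; a minimal sketch, where the inline `raw` sample stands in for the saved `getrawmempool true` output:

```python
import json

# Count how many mempool transactions carry no fee, from the JSON
# output of `bitcoind getrawmempool true`.  `raw` is an inline
# two-transaction sample standing in for the real saved output.
raw = """{
  "tx_with_fee":    {"size": 374, "fee": 0.00010000},
  "tx_without_fee": {"size": 520, "fee": 0.00000000}
}"""

mempool = json.loads(raw)
no_fee = sum(1 for tx in mempool.values() if tx["fee"] == 0)
print(f"{no_fee} of {len(mempool)} TX are without fee")
```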
|
|
|
But clearly some blocks are already full right up to the 1MB limit. I've been doing transactional systems for 30+ years; the serious trouble will start when the average over reasonable periods of time, e.g. an hour or so but not more than a day, begins to approach ~70%. http://en.wikipedia.org/wiki/Little's_law

Per https://blockchain.info/charts/n-transactions?showDataPoints=true&timespan=1year&show_header=true&daysAverageString=1&scale=0&format=csv&address=, Nov. 28, 2013 had the most transactions in a day, i.e. 102,010. From https://blockchain.info/block-height/271850 to https://blockchain.info/block-height/272030, i.e. the 180 blocks that day, one wonders what the block size distribution looked like. Gosh, it would be useful to have the size of the pool of waiting transactions at that time. Per https://blockchain.info/charts/n-transactions-per-block?showDataPoints=false&timespan=1year&show_header=true&daysAverageString=1&scale=0&format=csv&address=, we had an average of 560 transactions per block that day (only the 8th highest day so far). Feb. 27, 2014 had the highest average transactions per block so far, at 618. April 3, 2014 had the highest average block size, at 0.365623MB.

Arg, a day is too long. I just bet the hourly average peaks around 70% of 1MB. Does *anyone* have a record of the pool of waiting transactions? That's our key. When there are ~2,000 transactions in the queue waiting, we would expect a full 1MB block to be coming out next. When there are ~4,000 transactions waiting, we would expect the next two blocks to be full 1MB blocks; in this state, transactions can expect to take ~20 minutes to confirm. ~6,000 waiting -> ~30 minute confirmation times. And so on.

7 t/s * 60 s/m = 420 t/m; 420 t/m * 10 m/block = 4,200 t/block. That does not match observations: observations reveal only about 2,000 t/block. 2,000 t/block * 1 block/10 m = 200 t/m; 200 t/m * 1 m/60 s ~= 3.3 t/s. Who thinks we can squeeze 4,200 t/block? 3.3 t/s * 86,400 s/d = 285,120 t/d.

Trouble is closer than we thought: 70% * 285,120 t/d = 199,584 t/d. Gentlemen, I've seen this too many times before; when the workload grows to somewhere north of 200,000 t/d we *will* begin to see the pool of waiting transactions grow to tens of thousands, and confirmation times will be well over an hour. Increase the MAX_BLOCKSIZE as soon as is reasonable. 20MB, 32MB, whatever. Then enhance the code to segment blocks to exceed the API limit after that.
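The arithmetic above, written out (7 t/s is the oft-quoted theoretical ceiling; ~2,000 t/block is what observation shows):

```python
# Theoretical ceiling: 7 t/s * 60 s/m * 10 m/block = 4200 t/block.
theoretical_per_block = 7 * 60 * 10
print(theoretical_per_block)        # 4200

# Observed: ~2000 t/block in a full 1 MB block -> ~3.3 t/s.
observed_tps = 2000 / (10 * 60)
print(round(observed_tps, 1))       # 3.3

# Daily capacity at the observed rate, and the ~70% trouble threshold.
per_day = 3.3 * 86400
print(round(per_day))               # 285120
print(round(0.70 * per_day))        # 199584
```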
|
|
|
Um, is that it? How do we know if we've reached consensus? When will the version with the increased MAX_BLOCKSIZE be available?
|
|
|
So who are we kidding with this? Are we doing the block segment code now or later? Bump it to 32MB now to buy us time to do the block segment code.
|
|
|
1) What is the maximum value for MAX_BLOCKSIZE functionally possible given the APIs being used?
2) What is the maximum value which has been tested successfully? Have any sizes been tested that fail?
3) Why not just set it right now (or soon) to the value which works, and leave it at that? 3.1) What advantage is there to delaying the jump to the maximum tested value?
No miner is consistently filling up even the tiny 1MB blocks possible now. We see no evidence of self-dealing transactions. What are we afraid of?
Heck, jump to 20MB and grow it at 40% per year for 20 years; that's fine with me *but* be prepared to alter that if there is a need. How will we know we need to jump it up faster? A few blocks at the current maximum is hardly a reason to panic, but when the pool of transactions waiting to be blocked starts to grow without any apparent limit, then we've waited too long.
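For scale, here is what 20MB compounding at 40% per year looks like over the 20 years; just the arithmetic of the proposal, nothing more:

```python
# 20 MB starting cap, growing 40% per year for 20 years.
for year in (0, 5, 10, 15, 20):
    cap_mb = 20 * 1.4 ** year
    print(f"year {year:2}: {cap_mb:,.0f} MB")
```

By year 20 the cap is in the neighborhood of 16-17 GB, which is why the "be prepared to alter that" caveat matters in both directions.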
|
|
|
Hmm, it came only 19 seconds (if the timestamps can be trusted) after the previous one; lucky guy.
|
|
|
I dunno; here I am watching for blocks at or near the 1MB limit and along comes ... it just seems strange to me: https://blockchain.info/block-height/326639 -- apparently the miner couldn't be bothered to include even one transaction other than the coinbase transaction in the block? Could the pool have been empty from his point of view? Miner algorithm: listen for a block to be broadcast and immediately begin searching for the next block with only their coinbase transaction in it, ignoring all other transactions. Is there some sort of advantage to ignoring the other transactions?
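The suspected strategy, sketched in Python; purely illustrative, with `make_coinbase` and `search_for_pow` as hypothetical stand-ins rather than real client functions:

```python
# Hypothetical sketch of the "empty block" strategy described above:
# as soon as a new tip arrives, build a candidate containing only the
# coinbase transaction and start proof-of-work immediately, spending
# zero time selecting or validating mempool transactions.
def mine_empty_block(prev_hash, make_coinbase, search_for_pow):
    candidate = {"prev": prev_hash, "txs": [make_coinbase()]}  # coinbase only
    return search_for_pow(candidate)  # the advantage, if any, is latency
```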
|
|
|
Another nice big block, https://blockchain.info/block-height/326505, came through while we discuss the topic, yet the backlog of transactions https://blockchain.info/unconfirmed-transactions wasn't really huge, at just over 4,000 (or are we just getting used to such big backlogs?). We are bumping into the ceiling, gentlemen. It is safe to say we will begin to accumulate a bigger backlog pretty soon, once we start getting multiple blocks in a row near the current 1MB limit.

In my experience, in terms of queuing theory, http://en.wikipedia.org/wiki/Queueing_theory, we can expect real signs of trouble as the average block size over a reasonable period of time, e.g. an hour or maybe more like a day, begins to exceed 70% of the maximum. I'm going to try to build a model using JMT, http://jmt.sourceforge.net/.

Perhaps we could two-step our way to the functional maximum. We need to find the reliable functional maximum via testing. To give ourselves some time to find it, perhaps we could increase MAX_BLOCK_SIZE to the proposed 20MB right away (or as soon as is reasonable), then work diligently to find the greatest workable maximum, and jump to it when we're ready.
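The 70% rule of thumb can be illustrated with the textbook M/M/1 result W = 1/(mu - lambda): mean time in the system blows up as utilization approaches 1. A sketch, crudely treating block space as a single server (a real model, as in JMT, would be more careful):

```python
# M/M/1 mean time in system: W = 1 / (mu - lam), with service rate mu
# and arrival rate lam.  Watch W climb steeply once utilization
# rho = lam/mu passes ~70% -- the rule of thumb cited above.
mu = 1.0  # normalized service rate (1 block's worth per unit time)
for rho in (0.5, 0.7, 0.9, 0.95, 0.99):
    lam = rho * mu
    w = 1.0 / (mu - lam)
    print(f"utilization {rho:.0%}: mean time in system = {w:6.1f}x service time")
```

At 50% utilization a transaction waits about 2x the service time; at 90% it is 10x, and at 99% it is 100x, which is the shape of the backlog growth being predicted.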
|
|
|
I have an idea. Why not ask everyone in Bitcoin what they think we should do, then just do all of them! Or, we can just debate each idea until it no longer matters since the year will be 2150.
Essentially, a lot of ideas are being tried out via altcoins. Rushing to do anything just to get something done does not seem prudent, but hesitating forever will lead naturally to real consequences. Waiting for MAX_BLOCK_SIZE to become an emergency is waiting too long. https://bitcointalk.org/index.php?topic=419185.msg4552409#msg4552409 was an attempt to find the biggest queue to date.
|
|
|
.. more about this: there's actually the opposite kind of manipulation (or rather attack) possible: empty blocks. Right now they exist but don't hurt anyone; here they would push the max block size down, hurting the network.
Would it be reasonable to reject blocks with too few transactions in them if the pool of transactions waiting is above some threshold? https://bitcointalk.org/index.php?topic=165.msg1595#msg1595 gets at my point.
|
|
|
Anyone that wants to transact off-chain is able to do so independent of MAX_BLOCK_SIZE.
|
|
|
A maximum block size which is too small will naturally lead to more off-chain activity; folks/entities will not be denied the ability to transact.
A maximum block size which is too big thwarts participation by bandwidth-starved nodes. So?
I propose we set MAX_BLOCK_SIZE to the maximum functional value possible today and walk away, trusting the future to the caretakers then. If any malicious actors try to take advantage of it and attack, then Bitcoin was vulnerable to that already anyway. No one is even filling up the current 1MB blocks with self-dealing transactions as it is. Remind me again why it was lowered?
|
|
|