weex (OP)
Legendary
Offline
Activity: 1102
Merit: 1014
|
|
January 14, 2017, 01:39:35 AM Last edit: January 22, 2017, 04:28:46 AM by weex |
|
In the interest of advancing the state of the art of fee estimation, I've collected confirmation-time data for 1.2 million transactions spanning the last week. Here's a scatterplot: https://i.imgur.com/qcJIR6c.png and you can compare it to last week's at https://i.imgur.com/FGGBYpe.png

The file is CSV with the fields below:

fee_rate - satoshis/byte of the transaction
conf_blocks - number of blocks for the transaction to confirm
conf_time - time in seconds for the transaction to be confirmed
transaction_id - txid
first_seen - time the transaction was first imported into the database, using a process that checks a local Bitcoin Core node every 20 seconds
first_confirmed - time the transaction was imported from the first block it was seen in; orphaning is not handled here, and as the db has grown it's taking a bit longer to mark an entire block's worth of transactions confirmed
fee - fee included in the transaction, in satoshis
size - size of the transaction in bytes

Data files:
http://www.filedropper.com/txdb2tar
http://www.filedropper.com/conftimes
The first one, without conf_blocks: http://www.filedropper.com/confirmationtimes

This data was collected with https://github.com/weex/bitcoin-fee-distribution which also collects some other info like block #, block hash, # of inputs, and # of outputs. One thing that might matter is saving the txids of inputs when they are also in the mempool, but the script doesn't do that yet. Enjoy.
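For anyone loading the file, here is a minimal self-contained sketch of reading those fields and re-deriving fee_rate from fee and size. The two inline rows are synthetic stand-ins for the real CSV (the column names come from the field list above):

```python
import csv
import io

# Synthetic two-row stand-in for the published CSV (the real file is large).
sample = io.StringIO(
    "fee_rate,conf_blocks,conf_time,transaction_id,first_seen,first_confirmed,fee,size\n"
    "91,1,540,abc123,1484300000,1484300540,41678,458\n"
    "0,33,18457,def456,1484300100,1484318557,0,226\n"
)

rows = list(csv.DictReader(sample))
for row in rows:
    # fee_rate should match fee / size, in satoshis per byte (integer here).
    derived = int(row["fee"]) // int(row["size"])
    print(row["transaction_id"], derived)
```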
|
|
|
|
coinsocieties
|
|
January 14, 2017, 03:06:02 AM |
|
Very interesting to look at, but I don't completely understand some of this. I get the general concept, but I don't see much of a pattern. My buddy said he sees a pattern of sorts, but it's too broken up for me to make sense of it.
|
|
|
|
franky1
Legendary
Offline
Activity: 4368
Merit: 4744
|
|
January 14, 2017, 04:07:04 AM |
|
based on the million tx's (mine stopped at 1,048,575 results)
average tx confirmed in ~40 mins
average tx size: 458 bytes
average tx fee: 36892-41750 sat (depending on whether you include or exclude the 0-fee tx's in the average)
average fee per byte: 91 sat/byte; max fee per byte in range: 34883 sat/byte; min: 0 -- as for the max fee, either the source data has an error or someone last week paid A LOT for one of their transactions
max tx size: 98888 bytes (98.9KB); min tx size: 170 bytes -- as for the max bytes, either the source data has an error or someone last week had a near-99KB tx (filling 10% of a block with 1 tx)
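franky1's include/exclude-zero-fee averaging can be sketched as follows. fee_averages is a hypothetical helper and the sample numbers are made up, not drawn from the dataset:

```python
def fee_averages(txs):
    """Mean fee including and then excluding zero-fee transactions.

    txs: list of (fee_in_satoshis, size_in_bytes) tuples.
    """
    fees = [fee for fee, _size in txs]
    nonzero = [f for f in fees if f > 0]
    including = sum(fees) / len(fees)
    excluding = sum(nonzero) / len(nonzero) if nonzero else 0.0
    return including, excluding

# Made-up sample: three fee-paying tx's and one zero-fee tx.
sample = [(50000, 500), (30000, 400), (0, 226), (40000, 450)]
including, excluding = fee_averages(sample)
```

Excluding zero-fee transactions always raises the mean, which is why franky1 reports the fee average as a range.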
|
I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER. Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
|
|
|
weex (OP)
Legendary
Offline
Activity: 1102
Merit: 1014
|
|
January 14, 2017, 08:15:55 AM |
|
|
|
|
|
Amph
Legendary
Offline
Activity: 3248
Merit: 1070
|
|
January 14, 2017, 08:21:43 AM |
|
based on the million tx's (mine stopped at 1,048,575 results)
average tx confirmed in ~40 mins
average tx size: 458 bytes
average tx fee: 36892-41750 sat (depending on whether you include or exclude the 0-fee tx's in the average)
average fee per byte: 91 sat/byte; max fee per byte in range: 34883 sat/byte; min: 0 -- as for the max fee, either the source data has an error or someone last week paid A LOT for one of their transactions
max tx size: 98888 bytes (98.9KB); min tx size: 170 bytes -- as for the max bytes, either the source data has an error or someone last week had a near-99KB tx (filling 10% of a block with 1 tx)
Is the average tx size increasing over time? I remember it was 300 bytes before, and this also leads to higher fees, of course. If I remember correctly, the size of a tx is based only on how many inputs you spend and some bytes for the outputs. Does this mean that many people are doing a few big transactions and receiving many small ones? Correct?
|
|
|
|
weex (OP)
Legendary
Offline
Activity: 1102
Merit: 1014
|
|
January 14, 2017, 08:23:58 AM |
|
Big transactions skew the mean, and I noticed the 99KB one was never confirmed, so whether a transaction was confirmed at all is probably important for the calculations.
|
|
|
|
ArcCsch
Full Member
Offline
Activity: 224
Merit: 117
▲ Portable backup power source for mining.
|
|
January 14, 2017, 01:56:03 PM |
|
Someone is trying to combine dust transactions into one address. All the inputs are less than a milli. If the average fee rate is 0.91 μBTC/byte, and each input contributes 180 bytes, addresses with less than 0.16 mBTC are useless dust. As for the data, can someone please make a scatter-plot of time vs. fee rate? It should not be too difficult to do with Excel, but I tried and could not get it to work.
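The dust arithmetic here checks out: 180 bytes per input at 91 sat/byte is 16,380 satoshis, roughly 0.16 mBTC. A tiny sketch (dust_threshold_sat is a hypothetical name, not from the repo):

```python
SATS_PER_BTC = 100_000_000

def dust_threshold_sat(fee_rate_sat_per_byte, input_bytes=180):
    # An output is "useless dust" when spending it costs more than it holds:
    # the input that spends it adds ~input_bytes to the transaction.
    return fee_rate_sat_per_byte * input_bytes

cost = dust_threshold_sat(91)       # 0.91 uBTC/byte == 91 sat/byte
mbtc = cost / SATS_PER_BTC * 1000   # the same threshold expressed in mBTC
```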
|
If you don't have sole and complete control over the private keys, you don't have any bitcoin! Signature campaigns are OK, zero tolorance for spam! 1JGYXhfhPrkiHcpYkiuCoKpdycPhGCuswa
|
|
|
alyssa85
Legendary
Offline
Activity: 1652
Merit: 1088
CryptoTalk.Org - Get Paid for every Post!
|
|
January 14, 2017, 02:59:44 PM |
|
based on the million tx's (mine stopped at 1,048,575 results)
average tx confirmed in ~40 mins
average tx size: 458 bytes
average tx fee: 36892-41750 sat (depending on whether you include or exclude the 0-fee tx's in the average)
average fee per byte: 91 sat/byte; max fee per byte in range: 34883 sat/byte; min: 0 -- as for the max fee, either the source data has an error or someone last week paid A LOT for one of their transactions
max tx size: 98888 bytes (98.9KB); min tx size: 170 bytes -- as for the max bytes, either the source data has an error or someone last week had a near-99KB tx (filling 10% of a block with 1 tx)
And that, in a nutshell, is the problem with bitcoin: 40 minutes is too long. Transactions need to confirm in a few minutes if bitcoin is to have practical value as a currency. Then add in that the fees for this poor service are high, and why would anyone bother?
|
|
|
|
ranochigo
Legendary
Offline
Activity: 3038
Merit: 4420
Crypto Swap Exchange
|
|
January 14, 2017, 03:22:56 PM |
|
is the average tx size increasing in the time? because i remember it was 300bytes before, this also lead to more fee of course
No. If you use compressed keys, that would be quite near the size you would get for a transaction with 1 input and 1 output.
if i remember correctly the size of a tx is only based on how many imput you receive and some byte from the output
Yes. Each output occupies about 34 bytes; the size is determined by how many inputs you spend and how many UTXOs you create.
this mean that many are doing few big transaction and receiving many small one? correct?
You can't assume that.
|
|
|
|
ArcCsch
Full Member
Offline
Activity: 224
Merit: 117
▲ Portable backup power source for mining.
|
|
January 14, 2017, 06:16:59 PM |
|
Much of the spam in the blockchain comes from the following sources: 1. Faucets 2. Gambling 3. Dust change addresses. Faucets are a pathetic way to make bitcoin; I know this from personal experience. However, they serve two purposes for newbies: they fulfil the newbie's need to experiment with addresses and transactions, and, to a newbie, it is quite exciting to get their first chunk of bitcoin (personal experience). The first need can be satisfied by testnet coins, but the second reason is harder to eliminate, and is likely the main reason faucets are so prevalent. Gambling spams up the blockchain and provides entertainment (honourable enough) along with the potential for addiction and large losses (this ruins many people's lives).
Dust change addresses, however, are a problem that can be reduced. Say your wallet drafts a transaction using up several outputs to produce a payment. Most of the transaction goes into the payment output, but a small amount is left over. Adding another output costs only about 0.03094 mBTC, but spending that output later costs 0.1638 mBTC, for a total of about 0.2 mBTC. The fix I suggest: when the leftover is too small (less than, for example, 1 mBTC), the wallet should overpay instead of creating a change output. This should certainly not be a problem for the recipient, and it would reduce blockchain spam.
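The numbers in this post can be reproduced from the thread's 91 sat/byte average: 34 bytes x 91 = 3,094 sat (0.03094 mBTC) to create the change output, and 180 bytes x 91 = 16,380 sat (0.1638 mBTC) to spend it later. A hypothetical sketch:

```python
FEE_RATE = 91  # sat/byte, the thread's average fee rate

def change_output_cost_sat(output_bytes=34, input_bytes=180, fee_rate=FEE_RATE):
    """Cost of creating a change output now plus spending it later."""
    create = output_bytes * fee_rate   # extra bytes in this transaction
    spend = input_bytes * fee_rate     # extra bytes in the future spend
    return create, spend, create + spend

create_sat, spend_sat, total_sat = change_output_cost_sat()
```

The total of 19,474 sat is about 0.195 mBTC, matching the "about 0.2 mBTC" figure above.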
|
If you don't have sole and complete control over the private keys, you don't have any bitcoin! Signature campaigns are OK, zero tolorance for spam! 1JGYXhfhPrkiHcpYkiuCoKpdycPhGCuswa
|
|
|
weex (OP)
Legendary
Offline
Activity: 1102
Merit: 1014
|
|
January 14, 2017, 06:19:22 PM |
|
Someone is trying to combine dust transactions into one address. All the inputs are less than a milli. If the average fee rate is 0.91 μBTC/byte, and each input contributes 180 bytes, addresses with less than 0.16 mBTC are useless dust. As for the data, can someone please make a scatter-plot of time vs. fee rate? It should not be too difficult to do with Excel, but I tried and could not get it to work.
This must be done outside a spreadsheet, as none of them handle more than 64k records well. Pyplot is set up to do some graphing in the repo that collected this data, but maybe someone wants to attack this with R?
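For those who'd rather not use R, a rough matplotlib equivalent of the log-log scatter. This is a sketch, not the repo's actual plotting code; synthetic points stand in for confirmation_times.csv:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display required
import matplotlib.pyplot as plt

# Synthetic points; the real data would come from confirmation_times.csv.
fee_rate = [1, 5, 20, 91, 300]             # sat/byte
conf_time = [86400, 7200, 3600, 600, 60]   # seconds

fig, ax = plt.subplots()
ax.scatter(fee_rate, conf_time, s=0.25)    # tiny markers, like cex=0.05 in R
ax.set_xscale("log")
ax.set_yscale("log")
ax.set_xlabel("fee rate (sat/byte)")
ax.set_ylabel("confirmation time (s)")
# fig.savefig("scatter.png", dpi=150)      # uncomment to write the image
```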
|
|
|
|
jak3
Legendary
Offline
Activity: 1274
Merit: 1004
|
|
January 14, 2017, 06:43:06 PM |
|
Well, good work. 80MB is a pretty big set to run calculations on, but the more the better. We're going to do some good calculations and stats on this soon. It's a different matter that I now have to wait another hour before going to bed; I'm excited to see all the collected reports, which are going to answer many questions.
|
|
|
|
ArcCsch
Full Member
Offline
Activity: 224
Merit: 117
▲ Portable backup power source for mining.
|
|
January 14, 2017, 06:52:05 PM |
|
|
If you don't have sole and complete control over the private keys, you don't have any bitcoin! Signature campaigns are OK, zero tolorance for spam! 1JGYXhfhPrkiHcpYkiuCoKpdycPhGCuswa
|
|
|
franky1
Legendary
Offline
Activity: 4368
Merit: 4744
|
|
January 14, 2017, 06:52:55 PM |
|
is the average tx size increasing in the time? because i remember it was 300bytes before, this also lead to more fee of course
the numbers for average tx bytes over 8 years have changed. e.g. 2009 was under 250 and it became more over time. the OP's data in the spreadsheet is only 1m tx's, which is well under 500 blocks = less than a week of data, so it's not going to reveal much long-term change, just short-term. i did a few selective averages: 0-335k tx's = 447 bytes, 335k-666k tx's = 473 bytes, 666k-1m tx's = 454 bytes, and they all average about 447-473 bytes.
if i remember correctly the size of a tx is only based on how many imput you receive and some byte from the output this mean that many are doing few big transaction and receiving many small one? correct?
using old legacy transactions: ((148 * inputs used) + (34 * outputs used)) +-10 variance = tx size estimate. as for multisigs, that's a whole different calculation to work out the bytes of a tx, as there are more variables involved; i'm sure someone else has found a workable calculation for multisigs. but to answer your question: multisigs do use more bytes per tx if you compare a 2-in 2-out multisig to a 2-in 2-out legacy tx. oh, and let's not forget LN settlements, which will also include extra bytes for CLTV and CSV data and will bloat a tx even if it's still just 2-in 2-out. yep, segwit suggests more tx space, but then LN settlements refill that space with larger tx's.
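The legacy size formula above is easy to sketch. estimate_legacy_tx_size is a hypothetical name, and the +-10 variance is folded into a flat +10 here:

```python
def estimate_legacy_tx_size(n_inputs, n_outputs):
    """franky1's rule of thumb for pre-segwit transactions:
    148 bytes per input + 34 bytes per output + ~10 bytes of overhead
    (the +-10 variance is approximated as a flat +10)."""
    return 148 * n_inputs + 34 * n_outputs + 10

# The classic 1-in 2-out spend and a 2-in 2-out spend:
one_in_two_out = estimate_legacy_tx_size(1, 2)   # 226 bytes
two_in_two_out = estimate_legacy_tx_size(2, 2)   # 374 bytes
```

The 226-byte result for a 1-in 2-out spend is the commonly cited size for a simple legacy payment with change, which is a quick sanity check on the formula.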
|
I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER. Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
|
|
|
weex (OP)
Legendary
Offline
Activity: 1102
Merit: 1014
|
|
January 14, 2017, 08:44:07 PM Last edit: January 14, 2017, 09:10:52 PM by weex |
|
Made a scatterplot with R: http://imgur.com/cPkJ6tq

The code to make this is:

tx <- read.csv(file="confirmation_times.csv", sep=",", head=TRUE)
plot(tx$fee_rate, tx$conf_time, log="xy", pch=20, cex=0.05)

In the plot command, both axes are set to log scale, pch=20 means draw a dot, and cex=0.05 scales it down so it's about a pixel.
|
|
|
|
Velkro
Legendary
Offline
Activity: 2296
Merit: 1014
|
|
January 15, 2017, 12:55:30 AM |
|
based on the million tx's (mine stopped at 1,048,575 results)
average tx confirmed in ~40 mins
average tx size: 458 bytes
average tx fee: 36892-41750 sat (depending on whether you include or exclude the 0-fee tx's in the average)
average fee per byte: 91 sat/byte; max fee per byte in range: 34883 sat/byte; min: 0 -- as for the max fee, either the source data has an error or someone last week paid A LOT for one of their transactions
max tx size: 98888 bytes (98.9KB); min tx size: 170 bytes -- as for the max bytes, either the source data has an error or someone last week had a near-99KB tx (filling 10% of a block with 1 tx)
Personally I don't believe this data. Why? Because if it were true that the average tx confirmation is 40 min, that would be a pure failure of bitcoin's design of 10-minute confirmations. The data could be badly calculated because of edge cases vastly different from the rest of the data included in the calculation.
|
|
|
|
franky1
Legendary
Offline
Activity: 4368
Merit: 4744
|
|
January 15, 2017, 01:50:59 AM Last edit: January 15, 2017, 02:31:27 AM by franky1 |
|
Personally I don't believe this data. Why? Because if it were true that the average tx confirmation is 40 min, that would be a pure failure of bitcoin's design of 10-minute confirmations. The data could be badly calculated because of edge cases vastly different from the rest of the data included in the calculation.
firstly, the 10-minute expectation is not actually a bitcoin rule; no tx is guaranteed to be accepted in 10 minutes. the rule of bitcoin is that 2016 blocks should be produced in a fortnight. there is no rule that forces a tx into a block, and no rule that forces XXXX tx's per block either, so bitcoin could have empty blocks forever and still meet its protocol rules. the 10 minutes per block comes from dividing 2 weeks by 2016 blocks.
now let's get into the details of the transactions. an average block can only store ~2200 tx or a max of 1MB of data. check the mempool count: yep, more than 2200 tx or 1MB waiting most of the time. https://blockchain.info/unconfirmed-transactions - 3 blocks of tx's waiting at the time of writing this post = ~30 min wait for some tx's (using Velkro's very simplistic time overview of bitcoin confirms). yes, sometimes the mempool count is low and all tx's get into a block promptly; other times it can take a couple of blocks or up to an hour, depending on demand and other criteria. i'd say the data is not bad; it's actually accurate. you just have to understand the context of the data.
so here is the context: not everyone pays an excess/top fee to be first in line. some pay the minimum fee, which then gets outbid by 2200 others who pay slightly more, so the minimum payer is left waiting lower in the queue. and yep, some pay no fee at all, meaning they won't get accepted for hours. it's things like paying the minimum or no fee that push the range of times out, which then affects the average time. i'm sure people can go through the data and selectively delete the zero-fee tx's, but getting selective/creative over which tx's are deemed worthy of being part of someone's expectations is what starts being 'manipulative' and causes bad data.
edit: i just checked the couple hundred tx's with 0 fee; their average confirm time was 5 hours 7 minutes 37 seconds. the funny part is there were even tx's that paid over $100 at the time (last week the btc price was over $1k/btc), i.e. a 0.1 btc fee, and the average for these big spenders was 11 mins 32 secs (they basically bribed their way to the top of the list) with silly huge fees:
txid: 61d9e2841e462f0a73668bf37601f2c021e9a90a3810ef654a21063b5722840a
txid: 3a5546217b76ae91f0fd113dc7f8c863fd9099ab62abedaa71f2a856ccd48d6f
txid: 2fce0c36505aece2fa77df5f3bc02cf7d5ffe5231e7a41597b51d4a2ffb61383
txid: 3ea07465f19e188535766c1d4f60b6b5b968294212a5778b65ed13889c753636
txid: 0c48281a819ca34ae837297e1ece737dc779d7eee0025c8a46e4e87fc6658696
txid: d8d194e4ae415323a90a56cd999e2e7cca9dfe13258a4e82469223b8f2fbbc8c
txid: d14d0ddbdc269ebe09174a9d02e85b13c9ae97fc27e6e57cba196ef7c46ddab3
txid: 99901d44db56788b74999c0f6b4f3bc1c960fec61761b447be878ae1721ec6e4
txid: e78f3045c8348ff5882da99c0f8555294d889b641c820c82cc6f3ab62df103b0
txid: f6bf8c706fabb59489b1152cac038c69dbd565bd23dfed35d8131a08a655b846
txid: 415856aefeb42a6050abb8ec9b66b8a3688fd24632f36299b5deebf1a16e5c85
txid: e9ad2d09ec5de999b723ce8e74667243ccb7656ce09920c2b76fd91dea8b89ff
txid: 5ce70be2cf3163fad5192daf8356f9819f79622894c510c192929e78a714b332
txid: adf3dccfa9b2e24a5dfe7c997e927f6ac6865780c78f8afed388f34603883033
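The zero-fee vs big-spender comparison amounts to splitting on fee and averaging conf_time. A hypothetical sketch with made-up numbers (not the thread's actual averages):

```python
def mean_conf_time_by_fee_band(txs):
    """Average confirmation time for zero-fee vs fee-paying transactions.

    txs: list of (fee_in_satoshis, conf_time_in_seconds) tuples.
    Returns (zero_fee_mean, paying_mean); None for an empty band.
    """
    zero = [t for fee, t in txs if fee == 0]
    paying = [t for fee, t in txs if fee > 0]
    mean = lambda xs: sum(xs) / len(xs) if xs else None
    return mean(zero), mean(paying)

# Made-up sample: zero-fee tx's wait hours, fee payers wait minutes.
sample = [(0, 18000), (0, 19000), (20000, 600), (10000000, 300)]
zero_avg, paying_avg = mean_conf_time_by_fee_band(sample)
```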
|
I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER. Please do your own research & respect what is written here as both opinion & information gleaned from experience. many people replying with insults but no on-topic content substance, automatically are 'facepalmed' and yawned at
|
|
|
weex (OP)
Legendary
Offline
Activity: 1102
Merit: 1014
|
|
January 15, 2017, 02:17:26 AM |
|
Another pic with better axes: https://i.imgur.com/FGGBYpe.pngtx <- read.csv(file="confirmation_times.csv",sep=",",head=TRUE) plot(tx$fee_rate,tx$conf_time, log="xy", yaxt="n", xaxt="n", pch = 20, cex=0.05) marks <- c(0,60,600,3600,86400) axis(2,at=marks,labels=marks) xmarks <- c(1,5,10,20,100,300,1000,5000) axis(1,at=xmarks,labels=xmarks)
|
|
|
|
Amph
Legendary
Offline
Activity: 3248
Merit: 1070
|
|
January 15, 2017, 07:22:07 AM |
|
is the average tx size increasing in the time? because i remember it was 300bytes before, this also lead to more fee of course
the numbers for average tx bytes over 8 years have changed. e.g. 2009 was under 250 and it became more over time. the OP's data in the spreadsheet is only 1m tx's, which is well under 500 blocks = less than a week of data, so it's not going to reveal much long-term change, just short-term. i did a few selective averages: 0-335k tx's = 447 bytes, 335k-666k tx's = 473 bytes, 666k-1m tx's = 454 bytes, and they all average about 447-473 bytes
This is what I was talking about, and the only explanation is that people are receiving small transactions and sending big ones, and doing this will only increase the fee.
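The three-slice averaging franky1 describes can be sketched like this. slice_means is a hypothetical helper and the sample byte sizes are made up:

```python
def slice_means(sizes, n_slices=3):
    """Mean over consecutive slices of the dataset, mirroring franky1's
    0-335k / 335k-666k / 666k-1m breakdown (even slice boundaries here)."""
    chunk = len(sizes) // n_slices
    means = []
    for i in range(n_slices):
        # The last slice absorbs any remainder when the split isn't even.
        if i == n_slices - 1:
            part = sizes[i * chunk:]
        else:
            part = sizes[i * chunk:(i + 1) * chunk]
        means.append(sum(part) / len(part))
    return means

# Made-up byte sizes split into three equal slices:
means = slice_means([400] * 3 + [500] * 3 + [450] * 3)
```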
|
|
|
|
|
|