AFAICT it's not private at all.
|
|
|
Let me ask again.
Does anyone know the average number of unconfirmed TX's that exist in mempool at any given time, the average size in MB (i know <1) and what % of them get cleared into a block every 10 minutes or so?
https://blockchain.info/en/unconfirmed-transactions
I have glanced over that page a few times, and when it is high (4,000 tx or more) it seems to be filled with transactions that are never going to be included: dust, and zero-fee transactions that should carry a fee due to their size and output age. Logically, when blocks are not full, the queue of valid transactions should be zero. I think we will not see a long list of valid unconfirmed transactions until we are quite close to the block size limit on a 24-hour basis. That's even stronger evidence against this attack: if the available unconfirmed txs are mostly invalid because of dust limits or being non-standard, then they couldn't be included in such a bloated block. Go to statoshi.info and look at the "tx and blocks" section to get an idea of the trend.
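The clearance question above can be worked through with back-of-the-envelope numbers. All figures here (4,000 tx in the mempool, 500-byte average transaction) are illustrative assumptions, not measured network data:

```python
# Illustrative mempool clearance estimate. All inputs are assumed
# example values, not measurements.

MAX_BLOCK_BYTES = 1_000_000       # the 1 MB protocol limit at the time
AVG_TX_BYTES = 500                # assumed average transaction size
MEMPOOL_TX = 4_000                # assumed mempool depth

mempool_bytes = MEMPOOL_TX * AVG_TX_BYTES             # total mempool size
block_capacity_tx = MAX_BLOCK_BYTES // AVG_TX_BYTES   # txs fitting in one block
cleared_fraction = min(1.0, block_capacity_tx / MEMPOOL_TX)

print(f"mempool size: {mempool_bytes / 1e6:.1f} MB")
print(f"cleared per ~10 min block: {cleared_fraction:.0%}")
```

Under those assumptions the mempool holds about 2 MB of transactions and a full 1 MB block clears roughly half of it, which is why the backlog only becomes persistent once demand consistently exceeds block capacity.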
|
|
|
I agree this has potential. It'd be a clean, permanent solution with nothing arbitrary about it. Sometimes we just don't see the simple things? Too-large blocks are already disincentivized by their slow propagation and the resulting higher orphan risk. I read that many pools already use a "soft limit" well below 1 MB because of this. It would probably make the most sense for a miner to calculate the cost of the orphan risk (which needs some assumptions, I guess), contrast that with the potential additional tx fees to be earned, and make tx inclusion decisions based on that calculation. What will happen when block propagation becomes O(1)? (see: https://gist.github.com/gavinandresen/e20c3b5a1d4b97f79ac2)
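The miner calculation described above can be sketched roughly. The propagation model (extra delay proportional to block size, orphan probability ≈ 1 − e^(−delay/600)) and all parameter values are assumptions for illustration, not measured figures:

```python
import math

BLOCK_INTERVAL = 600.0     # seconds, expected time between blocks
BLOCK_REWARD = 25.0        # BTC subsidy (2015-era value)
DELAY_PER_MB = 10.0        # assumed extra propagation seconds per extra MB

def orphan_probability(extra_mb: float) -> float:
    """P(a competing block appears during our extra propagation delay)."""
    delay = extra_mb * DELAY_PER_MB
    return 1.0 - math.exp(-delay / BLOCK_INTERVAL)

def worth_including(extra_mb: float, extra_fees_btc: float) -> bool:
    """Include extra transactions only if their fees beat the expected orphan cost."""
    expected_loss = orphan_probability(extra_mb) * (BLOCK_REWARD + extra_fees_btc)
    return extra_fees_btc > expected_loss

# Growing a block by 0.5 MB costs ~0.8% orphan risk on the whole reward,
# so it only pays when the marginal fees exceed that expected loss.
print(worth_including(0.5, 0.1))   # False: 0.1 BTC in fees is below the expected loss
print(worth_including(0.5, 0.5))   # True
```

The point of the sketch is the shape of the tradeoff, not the numbers: with O(1) propagation the `DELAY_PER_MB` term goes to zero and this disincentive disappears, which is exactly the concern raised above.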
|
|
|
As Odalv already said, tx size can be really big. If you go over 100 KB a transaction is considered non-standard though, see: https://github.com/bitcoin/bitcoin/blob/master/src/main.cpp#L603-L611 and https://github.com/bitcoin/bitcoin/blob/master/src/main.h#L55-L56

// Extremely large transactions with lots of inputs can cost the network
// almost as much to process as they cost the sender in fees, because
// computing signature hashes is O(ninputs*txsize). Limiting transactions
// to MAX_STANDARD_TX_SIZE mitigates CPU exhaustion attacks.
unsigned int sz = tx.GetSerializeSize(SER_NETWORK, CTransaction::CURRENT_VERSION);
if (sz >= MAX_STANDARD_TX_SIZE) {
    reason = "tx-size";
    return false;
}

/** The maximum size for transactions we're willing to relay/mine */
static const unsigned int MAX_STANDARD_TX_SIZE = 100000;
That said, as long as you find a miner/pool that agrees to include your tx in the next block you're golden. As soon as I have time I'll look for the actual txs that Peter Todd was referring to.
Thanks for that. But even if a miner tries to include a non-standard tx in his solved block, it's quite probable that the majority of nodes/miners would still reject the block as a result, no?
The fact that the book is there on the blockchain means that the non-standard tx was included in a block. You need to find a pool accepting your non-standard tx, and it seems that Eligius would do it if you pay enough fees. From their "How are transactions selected for blocks?" page, under Transaction Processing, the current transaction selection policy is:
Both standard and non-standard transactions, including arbitrary P2SH, are mined if they pay a reasonable fee. Probable spam incurs a penalty on transaction priority. Dust, confirmed spam, "bare" multisig, and custom "dangerous" scripts will not be mined under any conditions.
I've never tried, but as long as you forward the tx directly to the Eligius node it should work. In fact it seems to me that non-standard txs are not relayed by full nodes, but I could be wrong.
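The size rule quoted from main.cpp above is easy to restate; this is just a Python restatement of that one check, with the constant taken from the quoted main.h:

```python
MAX_STANDARD_TX_SIZE = 100_000  # bytes, from the main.h snippet quoted above

def is_standard_size(serialized_tx: bytes) -> bool:
    """Mirror of the quoted tx-size standardness check: transactions at or
    over 100 KB are non-standard, so default nodes won't relay or mine them,
    though a cooperating pool can still include them in a block."""
    return len(serialized_tx) < MAX_STANDARD_TX_SIZE

print(is_standard_size(b"\x00" * 50_000))    # True: 50 KB is standard
print(is_standard_size(b"\x00" * 100_000))   # False: >= 100 KB is not
```

Note the check is `>=`, matching the C++ code: a transaction of exactly 100,000 bytes is already non-standard.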
|
|
|
Bitcoin vs Blockchain is still going at full swing in the mainstream press. Are blockchains here to stay, in one guise or another? "Just because bitcoin didn't succeed as a currency doesn't mean blockchain will succeed as a technology, but the experiment is important to run," says Patrick Collison of Stripe, a payments processor. The possible uses are legion, but the killer app is still missing.
I didn't realize that bitcoin the currency already failed.
|
|
|
Since then, Rusty's LN paper has appeared, and there is even less reason for delay in buying time, because there is a very real chance that a high-volume layer can be built over Bitcoin while keeping main-chain usage to a minimum. Why won't the devs buy a few years?
Sorry, I've 0 time to contribute to the discussion to any degree of usefulness, just a minor nitpick: Lightning Network isn't Rusty Russell's creation; Joseph Poon and Thaddeus Dryja are the main authors. That said, Rusty provides a fantastic series of posts explaining the key concepts behind LN: http://rusty.ozlabs.org/?p=450. One last thing: Rusty was recently hired by Blockstream.
Thanks for the correction! And the last point answers something that was puzzling me.
This is the public post where he announced the move from IBM to Blockstream: https://plus.google.com/103188246877163594460/posts/eEeJxNaajWg
|
|
|
Is this a consequence of p2sh?
Yes, it is. They used the 520 bytes available in the redeemScript to "inject" suitably padded text for easy recovery with strings(1) (i.e. strings -n 20).
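The recovery trick mentioned above can be sketched: pack printable text into chunks no larger than the 520-byte script-element push limit, then scan the raw data for long printable runs the way `strings -n 20` does. This is a toy illustration of the mechanism, not the actual scripts that were used:

```python
import string

PUSH_LIMIT = 520  # max size of a single pushed script element (e.g. a redeemScript)
PRINTABLE = set(string.printable.encode()) - set(b"\t\n\r\x0b\x0c")

def chunk_text(data: bytes, limit: int = PUSH_LIMIT) -> list[bytes]:
    """Split arbitrary data into push-sized chunks."""
    return [data[i:i + limit] for i in range(0, len(data), limit)]

def strings(data: bytes, min_len: int = 20) -> list[bytes]:
    """Rough equivalent of `strings -n 20`: printable runs of >= min_len bytes."""
    runs, current = [], bytearray()
    for b in data:
        if b in PRINTABLE:
            current.append(b)
        else:
            if len(current) >= min_len:
                runs.append(bytes(current))
            current = bytearray()
    if len(current) >= min_len:
        runs.append(bytes(current))
    return runs

message = b"This text would be hidden inside P2SH redeemScripts." * 20
chunks = chunk_text(message)
assert all(len(c) <= PUSH_LIMIT for c in chunks)
blob = b"\x00".join(chunks)        # stand-in for raw on-chain script data
print(strings(blob)[0][:30])       # the hidden text falls straight out
```

The non-printable script opcodes between pushes act as separators, which is exactly why a plain `strings` pass over the raw block data recovers the padded text so easily.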
|
|
|
Matt Corallo, Blockstream co-founder, just sent an email with the subject "Block Size Increase" to the btc dev mailing list; you can read it all here: http://sourceforge.net/p/bitcoin/mailman/message/34090292/
Quoting the relevant part: Personally, I'm rather strongly against any commitment to a block size increase in the near future. Long-term incentive compatibility requires that there be some fee pressure, and that blocks be relatively consistently full or very nearly full. What we see today are transactions enjoying next-block confirmations with nearly zero pressure to include any fee at all (though many do because it makes wallet code simpler).
bold is mine
|
|
|
Justus laid out a structure. There are a great number of missing pieces between what exists today in the consensus code and that structure. There are also undefined and missing elements in the ecosystem. I like the structure too, but we aren't anywhere close to it yet and it won't be soon.
The current proposal is Gavin's most reasonable one yet. He dropped the assumptions about the future, took out most of the guesswork, and is implementing the method satoshi offered when the anti-spam 1 MB limit was created in 0.3 or so.
More would be great, and future-proof would be even better, no limit best... but... remember, we are still in beta. The pieces to do any of that are not written, tested, integrated, accepted, and secured.
Bitcoin is still a baby. So as eager as we all are to get there, we will get there by baby steps or end up not getting there at all. There are a great many looking for us to fail (checked the short interest lately?); we help that failure along by over-extending what can be done.
20 MB gives us some room to build more of the missing pieces. It's enough for now.
I couldn't have said it better, really. I'm aware that bitcoin development is quite peculiar and there's little to no margin for error. And this is the reason why I think it's important to start spreading Justus's idea in advance, because as you rightly said we need a lot of time to get there. So the sooner we start the better. That said, I wonder what links you see between the price discovery mechanism sketched by Justus and the Bitcoin consensus code? I see them as quite unrelated pieces of the system, but maybe I am missing something obvious.
|
|
|
Interesting logarithmic Monte Carlo simulation of tx confirmation delays related to the progressive filling of the current 1 MB block size. Bottom line is we need an increase in the block size, otherwise confirmation delays will exponentially increase as the fill reaches the max. Note in the conclusion who would benefit from NOT increasing the block size: "SC's". No wonder gmax and LukeJr are spamming Reddit in defiance of Gavin's proposal: http://hashingit.com/analysis/34-bitcoin-traffic-bulletin
Nice graph! Thanks for sharing. So with a 100% filled block you have a 0.1 probability of having your tx confirmed after 1000 seconds. This is nasty. As I already said multiple times, I'm a big supporter of Justus's proposal to eliminate false block size scarcity and introduce economic incentives into the p2p node network. Nonetheless I think that a part of txs will occur outside the Bitcoin main chain, through payment hubs, the Lightning Network, you name it. In fact the very solution Justus proposed to implement a price discovery mechanism in the Bitcoin p2p network is based on micropayment channels.
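A minimal version of that kind of queueing simulation can be sketched. The block capacity and arrival rates here are assumed round numbers for illustration, not the parameters from the linked analysis:

```python
import math
import random

def poisson(rng: random.Random, lam: float) -> int:
    """Knuth's Poisson sampler (fine for small lam)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def mean_confirmation_delay(load: float, capacity: int = 20,
                            blocks: int = 20_000, seed: int = 1) -> float:
    """Average number of blocks a tx waits before confirming.

    `load` is the tx arrival rate as a fraction of block capacity;
    arrivals are Poisson, and each block confirms up to `capacity`
    queued txs, oldest first (FIFO, no fee prioritization).
    """
    rng = random.Random(seed)
    queue = []   # arrival block index of each pending tx
    waits = []
    for b in range(blocks):
        queue.extend([b] * poisson(rng, load * capacity))
        for arrived in queue[:capacity]:
            waits.append(b - arrived + 1)   # confirmed in this block
        del queue[:capacity]
    return sum(waits) / len(waits)

for load in (0.5, 0.9, 0.99):
    print(f"load {load:.0%}: ~{mean_confirmation_delay(load):.2f} blocks")
```

Even this toy model reproduces the qualitative conclusion above: delays stay near one block at 50% load and blow up sharply as load approaches 100% of capacity, which is the nonlinearity the linked article is about.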
|
|
|
With a UTXO Merkle tree the whole thing does not need to be in phone RAM, or even on the phone. Phone clients could validate and forward txns iff they are connected to wifi. The only parts of the UTXO Merkle tree that need to be processed are the log(n) routes from each UTXO involved in a txn to the tree root. So very doable on today's mid-range smartphone, especially with a good-sized uSD expansion.
On that matter I've just found a proposal by etotheipi (Armory core dev), "Ultimate blockchain compression w/ trust-free lite nodes", that he made in June 2012. This is the summary: Use a special tree data structure to organize all unspent-TxOuts on the network, and use the root of this tree to communicate its "signature" between nodes. The leaves of this tree actually correspond to addresses/scripts, and the data at the leaf is actually a root of the unspent-TxOut list for that address/script. To maintain security of the tree signatures, it will be included in the header of an alternate blockchain, which will be secured by merged mining.
This provides the same compression as the simpler unspent-TxOut merkle tree, but also gives nodes a way to download just the unspent-TxOut list for each address in their wallet, and verify that list directly against the blockheaders. Therefore, even lightweight nodes can get full address information, from any untrusted peer, and with only a tiny amount of downloaded data (a few kB).
More recently TierNolan, with his "Locally verifiable unspent transaction output commitments", is "sketching" his idea of a potential implementation: A proposal to help SPV nodes verify blocks is to commit the root of an (unbalanced) Merkle tree containing all unspent (and spendable) transaction outputs. It is possible to create proofs of validity for each modification of the set. The proof would prove that inserting an entry into a tree with a root of X will give a new tree with a root of Y. There would also be proofs for removing entries. It seems that the idea of a "UTXO" light node, to add to the list of already implemented node types (full, SPV, pruned), is gaining momentum.
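The log(n)-path idea running through the posts above can be illustrated with a generic Merkle membership proof. This toy uses SHA-256 over a flat list of UTXO identifiers; it is not the actual tree layout proposed by etotheipi or TierNolan, just the mechanism a light node would rely on:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves: list[bytes]) -> list[list[bytes]]:
    """Bottom-up list of levels; odd levels duplicate their last node."""
    levels = [[h(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        if len(prev) % 2:
            prev = prev + [prev[-1]]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def make_proof(levels: list[list[bytes]], index: int) -> list[bytes]:
    """Sibling hashes along the log(n) path from a leaf to the root."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append(level[index ^ 1])
        index //= 2
    return proof

def verify(leaf: bytes, index: int, proof: list[bytes], root: bytes) -> bool:
    """Recompute the root from the leaf and its sibling path."""
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

utxos = [f"utxo-{i}".encode() for i in range(6)]
levels = build_tree(utxos)
root = levels[-1][0]
proof = make_proof(levels, 3)
print(verify(utxos[3], 3, proof, root))        # True
print(verify(b"not-in-set", 3, proof, root))   # False
```

The point made in the phone post falls out directly: a client holding only the committed root verifies any single UTXO with a proof of log(n) hashes, a few hundred bytes even for a set of millions of outputs.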
|
|
|
Speaking about recent Justus' activity, I forgot to mention "Reusable payment codes": https://github.com/justusranvier/rfc/blob/payment_code/bips/bip-pc01.mediawiki
This link contains an RFC for a new type of Bitcoin address called a "payment code"
Payment codes are SPV-friendly alternatives to DarkWallet-style stealth addresses which provide useful features such as positively identifying senders to recipients and automatically providing for transaction refunds.
Payment codes can be publicly advertised and associated with a real-life identity without causing a loss of financial privacy.
Compared to stealth addresses, payment codes require less blockchain data storage.
Payment codes require 65 bytes of OP_RETURN data per sender-recipient pair, while stealth addresses require 40 bytes per transaction.
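The storage comparison quoted above implies a simple break-even; a quick check using only the two figures given (65 bytes once per sender-recipient pair vs 40 bytes per stealth transaction):

```python
PAYMENT_CODE_BYTES_PER_PAIR = 65   # one-time OP_RETURN cost per pair, as quoted
STEALTH_BYTES_PER_TX = 40          # per-transaction OP_RETURN cost, as quoted

def blockchain_bytes(num_txs: int, scheme: str) -> int:
    """On-chain overhead for one sender-recipient pair making num_txs payments."""
    if scheme == "payment_code":
        return PAYMENT_CODE_BYTES_PER_PAIR   # paid once, reused for every tx
    return STEALTH_BYTES_PER_TX * num_txs    # paid again on every tx

for n in (1, 2, 10):
    pc = blockchain_bytes(n, "payment_code")
    st = blockchain_bytes(n, "stealth")
    print(f"{n:>2} txs: payment code {pc} B vs stealth {st} B")
```

So stealth addresses are cheaper only for a pair that transacts exactly once; from the second payment onward the reusable payment code costs less blockchain data, and the gap grows with every further transaction.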
|
|
|
Justus's seminal work on incentivizing full nodes will take some time to be assimilated into the mainstream of the community, but I think it will have to be, either through some high-powered explanatory tool or by the market forcing it on everyone when the rubber meets the road, which will be unsettling for many. Speaking of high-powered explanatory tools, it would be great if there were a sort of "hard money wiki/FAQ," for lack of a better term, that explains the economic points often raised in this thread for a general (Bitcoin-conversant) audience. Not just why Bitcoin is replacing gold, though that's a core part of it, but all the points about incentives and economics that are typically misunderstood in the wider community. Things like Money as Memory, the role of investors ("who controls Bitcoin?"), why Bitcoin is more valuable as a store of value than as a digital currency, how and why to introduce market economics to fee payment, node incentivization, etc.
Instead of a traditional wiki format, I'm a big fan of prezi.com's platform for visual explanations. In that format it could serve as a visual background for hundreds of easy-to-produce videos where someone simply explains a bite-sized point while using one small aspect of the larger Prezi wiki, effectively narrating it.
I don't even know if the core developers and other "inner" influential community members have the right level of awareness. I've even asked Justus if he had the chance to talk with some of the aforementioned people about his proposal. This is Justus' answer:
Secondly, to the best of your knowledge, are the core devs aware of this issue? Do they agree with the given definition of the problem? And more to the point, is there a plan to develop something along the line of your proposed solution?
To the best of my knowledge, they are aware of it. None of them have commented, however. I don't know of any immediate, specific plans to implement this.
Without such a visual communication medium and readily-producible videos, I think the explanatory load to actually educate the larger community about economics and such is just too much. The Nakamoto Institute is doing pretty well, but relying solely on essays is too slow in Bitcoin time for the permeation of understanding that it would be ideal to have.
I agree we need to accelerate the assimilation and the understanding of those concepts at least among those people that can influence bitcoin development trajectory.
|
|
|
Did you see the latest commit on pruning? It allows you to specify max disk usage and it prunes old blocks; it validates by reindexing, which downloads the whole chain and then prunes. Did you check it out?
I wonder what the point is: if you have to download the full chain anyway, why prune? The main bottleneck is the lengthy sync time, not storage space. I also noticed that it has logic to stop sending requested blocks that have been pruned. So there is an assumption that there are full nodes out there to give you the blocks that others have pruned.
There's also a plan to let you keep certain block ranges, plus a network protocol addition to advertise which parts a node has. That way, full block data can be kept in a distributed fashion while nodes can still run (and contribute more meaningfully than just as a relay and UTXO set provider) even with disk space limits. In fact with that it would be theoretically possible to run the network (fully trustable) without any node having storage space for the whole blockchain.
Kinda like BitTorrent? This would just optimize the p2p layer of the protocol, but we need to ensure it's safe from attacks; maybe that's why we didn't do it, to be on the safe side.
I think that more than BitTorrent you should look at something along the lines of a DHT. There's a very informative thread in the Development & Technical Discussion section of the forum... found it: "Using a DHT to reduce the resource requirements of full nodes" by DeathAndTaxes https://bitcointalk.org/index.php?topic=662734.msg7470866#msg7470866
Especially look at gmaxwell's reply. Another really helpful source of info about the approach adopted in the prune implementation is this 2014 thread on the bitcoin core dev mailing list: "[Bitcoin-development] Service bits for pruned nodes" http://sourceforge.net/p/bitcoin/mailman/bitcoin-development/thread/CAPg%2BsBjSe23eADMxu-1mx0Kg2LGkN%2BBSNByq0PtZcMxAMh0uTg%40mail.gmail.com/#msg30779911
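The pruning policy described above (a max disk target, dropping the oldest block files first while never touching recent blocks) might be sketched like this. The file sizes and heights are hypothetical, and this is a simplified sketch of the policy, not Bitcoin Core's actual implementation:

```python
def select_files_to_prune(file_sizes: list[int], file_last_height: list[int],
                          tip_height: int, target_bytes: int,
                          min_keep: int = 288) -> list[int]:
    """Indices of the oldest block files to delete to get under target_bytes.

    Files containing any of the most recent `min_keep` blocks are never
    pruned, so recently-connected blocks stay available for reorgs and peers.
    """
    total = sum(file_sizes)
    prune = []
    for i, (size, last_height) in enumerate(zip(file_sizes, file_last_height)):
        if total <= target_bytes:
            break                               # already under the disk target
        if last_height >= tip_height - min_keep:
            break                               # everything from here on is too recent
        prune.append(i)
        total -= size
    return prune

# Hypothetical node: four block files, a 2 GB disk target, tip at height 400,000.
sizes = [900_000_000, 900_000_000, 900_000_000, 400_000_000]
last_heights = [100_000, 200_000, 300_000, 400_000]
print(select_files_to_prune(sizes, last_heights, 400_000, 2_000_000_000))
# → [0, 1]: dropping the two oldest files gets under the 2 GB target.
```

The "stop sending pruned blocks" logic mentioned above is the flip side of the same bookkeeping: once a file index lands in the prune list, the node must also stop advertising those block ranges to peers.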
|
|
|