  Show Posts
861  Bitcoin / Development & Technical Discussion / Re: help with finding issues in a modifcation of Bitcoin core on: May 21, 2015, 04:39:49 PM
Since there are people who are going to override this anyways I think its better for those users to use tested software than being forced to use their own fork.
I disagree. Most of the people who think doing this is okay are not technically competent enough to make the change -- that is why making the change is hard for them. The result is that the software crashes for them (as we see in this thread), and they adopt a new approach -- sometimes a more civil one. To whatever extent people do successfully make such changes, having more users of the software able to make modifications is a virtue. There are parts of the system where diversity is a hazard, but that's generally limited to the consensus parts.

Quote
you usually only get 20-30, these are available resources and using them is not something I consider to be wasting.
That's only true when you are on colocation subnets which are already saturated with nodes. A more common number is 80+, and we've had instances of widespread fullness in the past (resulting in problems with nodes getting online). If you do the simple arithmetic you will conclude that even if only 20 slots were typically used (which isn't the case), it would only take about 100 parties in total running "connect to everyone" settings to completely saturate it.  Moreover, what you "consider" or not is irrelevant. It's other people's resources you would be consuming, usually for reasons which are highly adverse to their interests (e.g. spying on them and other users of Bitcoin)... beyond the spying activity, the second most common reason for wanting to do this is also ill advised (some people erroneously believe more connections improve block propagation, when in fact too many connections slow it down).
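To make that arithmetic concrete, here is a rough sketch; the only added assumption is the stock 125-connection default, and the point is that each "connect to everyone" party consumes one slot on every listening node it reaches, so the number of parties needed to saturate the network is roughly the number of free slots per node:

Code:
# Back-of-the-envelope: why "connect to everyone" settings saturate inbound capacity.
# The 125-slot figure is the stock default; other numbers are illustrative.
DEFAULT_SLOTS = 125      # default total connection slots per listening node
TYPICALLY_USED = 20      # the optimistic figure quoted above
free_slots = DEFAULT_SLOTS - TYPICALLY_USED

# Each "connect to everyone" party takes one slot on every listening node,
# so the free slots per node bound how many such parties the network can absorb.
parties_to_saturate = free_slots

print(f"free slots per node: {free_slots}")
print(f"parties needed to saturate every listening node: ~{parties_to_saturate}")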
862  Bitcoin / Development & Technical Discussion / Re: help with finding issues in a modifcation of Bitcoin core on: May 21, 2015, 04:43:43 AM
Having to maintain and compile a separate fork means they have to run code that is less well tested than it could be especially since the core developers actively refuse to provide any assistance and encourage others to not help people making these modifications.
Yes, we refuse to help people who are objectively attacking the network, wasting other users' resources, and harming their privacy.  Beyond being harmful to Bitcoin and Bitcoin's users, knowingly assisting in this activity might make a participant the target of tort claims from harmed parties, or even subject to criminal prosecution in jurisdictions with expansive computer crime laws.

Why people think that it's casually okay to attack the network, and think there is any chance that standard Bitcoin software would come with a user-exposed switch to make a single node try to use a substantial fraction of the network's total capacity, is beyond me. That isn't going to happen; and users should be highly wary of the competence or ethics of anyone who ships software that does (after all, if might makes right, then why not also have the software backdoor your computer? If it's possible to do, it's okay, by the logic presented here, no?). The fact that someone has no ethical qualms about the resources of other people that they waste or the privacy harm most of these efforts are intended to create, nor the most basic software engineering experience to understand the requirements and ramifications of a software change, doesn't create any obligation on my part to compromise my own integrity and aid in these activities.

And sure, sufficiently software-competent people can technically modify the software or write their own which behaves aggressively; but fewer people doing it is an improvement (fewer resources wasted) even if it is not possible to prevent it completely. This isn't the only thing that is done with respect to this kind of abuse; as in many things, a layered response is important. The first layer is making sure that thoughtful and ethical people do not accidentally abuse the network -- and making it clear what behavior is abusive -- then social, technical, and legal measures can be employed against those who remain. (All of which are continually being worked on by many people.)
863  Bitcoin / Development & Technical Discussion / Re: Can dynamic accumulators be used to store UTXOs on: May 20, 2015, 10:42:30 PM
Deletion is possible in this scheme.  It requires running the extended gcd algorithm (should be cheap) and two exponentiations for every participant to update their witnesses.  Deleting from the accumulator means just replacing it by the witness.
I'll have to work through it then. I wasn't aware of how one could remove an n-th root without knowing the group order, which seemed to be what prior schemes required. Hm. Maybe the fact that the value being removed can be public is what makes it work.

Quote
Yes, this would be using Merkle trees, right?  But if my math is correct, the witness proofs it requires are larger than the transactions.  I'm not sure if ordering the tree is necessary (and you would then need to rebalance it) or do you just propose to put the elements in the order they are inserted?  Instead of zeroing out the transactions one could also put the new outputs in the freed spaces to avoid the trees from growing too deep or becoming unbalanced, but still the proofs are quite large.
A hashtree constructed out of 32 byte hashes with 4 billion entries would be 32 levels deep-- so the size compares very favorably with an RSA accumulator + witnesses with equivalent security.

A significant portion of the proofs is also the first couple of levels of the tree, which could be cached to avoid retransmitting them -- and such caching could be negotiated on a peer-to-peer basis.   For example, if nodes cached the first 16 levels of the tree (2 MB of data), a tree membership/update proof for a 2^32 tree would be 512 bytes.

No balancing is required, which is why I mentioned insertion ordered.  You can incrementally build an insertion ordered tree with log(n) memory (you just remember the leading edge of the tree, and as levels fill you ripple up to the root).  A search tree can be used instead, but it would require proofs for both outputs and inputs.

If you don't think 2^32 coins is enough, you can pick numbers arbitrarily large... e.g. 2^51 entries, with a 500MB cache, requires 928-byte proofs.  Because of the log factor the cost is _essentially_ a constant in a physically limited world, so it's just a question of whether that constant is low enough.
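A minimal sketch of that size arithmetic, assuming 32-byte hashes and that a proof only needs the sibling hashes below the cached levels (leaf data and any per-entry overhead are ignored here, which is why the larger quoted figure comes out a bit bigger than this formula alone gives):

Code:
import math

HASH_BYTES = 32

def tree_depth(entries):
    """Levels below the root of a binary hash tree with this many leaves."""
    return math.ceil(math.log2(entries))

def cache_bytes(cached_levels):
    """Size of caching the widest cached row of interior nodes."""
    return (2 ** cached_levels) * HASH_BYTES

def proof_bytes(entries, cached_levels):
    """Sibling hashes needed below the cached portion of the tree."""
    return (tree_depth(entries) - cached_levels) * HASH_BYTES

# Figures comparable to the 2^32 case above: 16 cached levels, 2^32 entries.
print(cache_bytes(16))          # 2097152 bytes, i.e. ~2 MB of cached nodes
print(proof_bytes(2**32, 16))   # 512-byte membership/update proof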

Quote
I don't see why Merkle Trees are smaller than the RSA accumulator, but probably you assumed that deletion is not possible and you need a list of spent TXO.  I think you just need the accumulator (fixed size, about 400 bytes) and as witness proof another value with all inputs deleted per block.   Per transaction you also need the witness for all inputs (another 400 bytes), but this doesn't have to be stored.  With Merkle-Trees you need a single hash for the accumulator, but the witness proofs of the transactions need also be included to check the updates and merging them doesn't save much.  Even assuming a depth of only 24 (16 million UTXOs; we currently have more) you need 768 bytes for a single witness of a single transaction input assuming 256 bit hashes.
See my figures above.  The zerocoin membership witnesses are about 40kbytes in size for 80 bit security, so I may have been overestimating the size of the accumulator updates... but even if we assume that they're 400 bytes then that puts them into the same ballpark as the TXO proofs with top of the tree caching.  And, of course, verifying hundreds of thousands per second on a conventional CPU is not a big deal...

Quote
As you wrote, instead of RSA you just need a group of unknown order.  But "unknown order" means that it is infeasible to compute the order.  Are there any other group besides RSA with this property?  AFAIK for elliptic curves it is expensive but not infeasible to compute the order.
For EC the order computation is quartic in the size of the field (via the SEA algorithm), unless the curve has special structure which allows it to be computed faster; so it's not really possible to get a reasonably efficient EC group where the order is cryptographically infeasible to compute.  Class groups of imaginary quadratic orders are believed to be strong in this respect, but there hasn't been that much work or analysis on cryptosystems based on ideals, so I'm unsure of how useful they'd be: https://www.cdc.informatik.tu-darmstadt.de/reports/TR/TI-03-11.nfc_survey_03.pdf

Another way to address the trusted setup with the RSA schemes is to use a UFO, but this means you end up with a field tens of KB in size just to get 80 bit security.
864  Bitcoin / Development & Technical Discussion / Re: Reduce Block Size Limit on: May 20, 2015, 09:52:32 PM
There was a hard 256kb limit on maximum acceptable blocksize?  Are you sure about that?  I don't remember that.  Regardless, there's a significant difference in risk between increasing the block size limit and removing it.

There was a target on the size -- not a blockchain validation rule (this has created some confusion for people, because they go look back at old discussions about the temporary target and how easy it would be to increase, and think it was about the blocksize limit); but that was just local policy. By default, miners running stock software wouldn't create blocks over 250k, but all nodes would happily accept larger blocks up to the validation rule limit. When that policy target was upped we saw a massive influx of things like unsolicited advertisement transactions, which also increased when it was increased further. The only actual limit on block sizes (beyond the message encoding behavior) has only ever been the million byte limit.

There is zero incentive for miners to not fill the blocks entirely; almost any non-zero fee would be sufficient.
There are physical limits and costs that would prevent this.  Each additional transaction increases the size of the block.  There are costs associated with increasing the size of a block.  At a minimum, there is a (very small) increase in the chance that the block will be orphaned.
The only _fundamental_ cost is communicating the discrepancy between the transactions included and the assumed included transactions.  This can be arbitrarily low, e.g. if miners delay a little to include only somewhat older, well-propagated transactions -- the cost then is not a question of "size" but of breaking rank with what other miners are doing (and, in fact, producing a smaller block would be more costly).

Even without optimal differential transmission, and only looking at techniques which are nearly _universally_ deployed by large miners today: with the relay network protocol the marginal cost of including an already-relayed transaction is two bytes per transaction. I can no longer measure a correlation between block size and orphaning rate, though there was a substantial one a few years ago, before newer technology mostly eliminated size-related impact on orphaning.
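To illustrate where a "two bytes per transaction" figure can come from -- this is only the size intuition, not the actual relay network wire format: if both ends hold an identically ordered pool of recently relayed transactions, a block can be sent as short indexes into that pool, with full data only for transactions the peer hasn't seen.

Code:
# Conceptual sketch only: referencing already-relayed transactions by 2-byte index.
import struct

def encode_block(block_txids, shared_pool_index):
    """shared_pool_index maps txid -> small position in a pool both peers hold;
    transactions not in the pool are sent in full after an escape marker."""
    out = bytearray()
    for txid in block_txids:
        pos = shared_pool_index.get(txid)
        if pos is not None and pos < 0xFFFF:
            out += struct.pack("<H", pos)     # 2 bytes per already-relayed tx
        else:
            out += b"\xff\xff" + txid         # escape + full identifier/data
    return bytes(out)

pool = {i.to_bytes(32, "little"): i for i in range(1000)}   # pretend shared pool
block = list(pool)[:500]                                    # block of known txs
print(len(encode_block(block, pool)) / len(block))          # ~2.0 bytes per tx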

Importantly, to whatever extent residual marginal cost exists, these costs can be completely eliminated by consolidating the control of mining into larger pools. We already saw people intentionally centralizing pooling as a response to orphaning (two years ago), which prompted the creation of the block relay network/protocol to try to remove some of that centralization pressure, by reducing the cost of block relay so there was less to gain from lowering the cost by centralizing. Moreover, any funds being spent coping with these costs (e.g. paying for faster connectivity to the majority of the hash-power) cannot be funds spent on POW security.  So I would refine DumbFruit's argument to point out that it isn't that "fees would naturally be priced at zero" but that the equilibrium is one where there is only a single full node in the network (whose bandwidth costs the fees pay for) and no POW security, because that is the most efficient configuration, there is no in-system control or pressure against it, and there is no ability to empower the users to choose another outcome except via the definition of the system.  I believe this is essentially the point he's making with "the most competitive configuration in a free market" -- even to the extent those costs exist at all, they are minimized through maximal centralization.  This is why it is my belief that it's essential that the cost of running a node be absolutely low and relatively insignificant compared to POW security, or otherwise centralizing is a dominant strategy for miners.

Quote
storage costs associated with holding the list of unconfirmed transactions in memory.
One does not need to store transactions in memory ever -- that Bitcoin Core currently does so is just an engineering artifact, and because there is currently no reason not to.  Technically a miner does not need to store a transaction they've verified in any way at all, beyond remembering that it successfully verified (and remembering that something verified doesn't even need to be reliable). Depending on people not getting around to writing more efficient software or forming more efficient (e.g. more centralized) institutions would be a weak protection indeed!
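A tiny sketch of that "remember only that it verified" idea, with made-up names rather than Bitcoin Core's actual caches: a bounded, lossy record of transactions whose scripts already checked out; dropping entries is always safe because a miss only costs redundant re-verification.

Code:
from collections import OrderedDict

class VerifiedTxCache:
    """Lossy memory of which transactions already passed script checks.
    Evicting an entry is always safe: the worst case is re-verifying."""
    def __init__(self, max_entries=100_000):
        self.max_entries = max_entries
        self._seen = OrderedDict()

    def mark_verified(self, txid: bytes) -> None:
        self._seen[txid] = True
        self._seen.move_to_end(txid)
        if len(self._seen) > self.max_entries:
            self._seen.popitem(last=False)   # drop the oldest entry

    def already_verified(self, txid: bytes) -> bool:
        return txid in self._seen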

Quote
or should be controlled by market forces based on the the physical limitations and costs associated with increasing the block size.
That's a problem when the physical limitations largely do not exist, and to the extent that they exist, they can be eliminated almost completely by configuring the ecosystem in a more centralized manner (and incrementally so: given an existing ecosystem with block-relay-related costs, you can always mitigate those costs by centralizing a little bit more).

865  Bitcoin / Development & Technical Discussion / Re: Regtest Consensus Forking Behavior Introduced in Bitcoin Core in May 2014 on: May 20, 2015, 08:19:17 AM
Sorry if this is a stupid question. Why would you need to test with a low difficulty like this, while you could have a difficulty of 46 with the now worthless 330MH/s USB Block Erupter?
To test on some 'cloud' server that doesn't have the worthless USB Block Erupter. Smiley  Also because you want to crank through thousands of simulated blocks in a few minutes.

I personally think it's of fairly marginal value (thus the mention of testing with mainnet) but not worthless.

Perhaps I should start a collection effort of old ASIC miners for Bitcoin software developers? Smiley  There are actually USB miners with a lot more than 330MH/s which should be worthless-ish now.
866  Bitcoin / Bitcoin Technical Support / Re: How long would it take me to complete Bitcoin client upgrade? on: May 20, 2015, 08:14:47 AM
It should not, if your chain is not corrupted and you shut it down cleanly first. The databases are forward compatible.

If a reindex is taking 3 or 4 days your computer is very slow or there is something wrong with it. On a several year old desktop (3.2GHz quad core with SSD) I can download and fully sync in about 3 _hours_ with the current software; and reindexing is somewhat faster.
867  Bitcoin / Development & Technical Discussion / Re: Reduce Block Size Limit on: May 20, 2015, 08:11:22 AM
a socialist production quota that removes the incentive to improve the p2p network in order to avoid centralization.
Yes? And so?   The limited total number of coins is a socialist production quota used to create the scarcity needed for Bitcoin to function as a money-like good.   The enforcement of digital signatures is a socialist constraint on the spend-ability of coins that makes possible something akin to ownership.  Decentralization is no less a fundamental defining characteristic of Bitcoin than limited supply or ownership of coins -- that it must be protected shouldn't be up for debate anywhere; but reasonable people can easily disagree about the contours of trade-offs or the ramifications of decisions.

Following the general logic you suggest, nothing at all would be enforced -- miners could publish whatever they wanted -- and the system would be worthless.  Bitcoin is a system that has any value at all because it enforces rules against behavior that would otherwise be permitted by the laws of nature.

This isn't to say that all limits are proper or good or well calibrated; but you cannot take a principled stance against any and all limits in general and then speak reasonably about Bitcoin at all.
868  Bitcoin / Development & Technical Discussion / Re: Can dynamic accumulators be used to store UTXOs on: May 20, 2015, 06:07:21 AM
AFAIK these accumulator schemes require a spent-coins list that grows forever, as they require a trusted-party trapdoor to delete efficiently; so you would need to keep a linear database to prevent double spends. Or have you found a scheme with an efficient trustless delete?

RSA based accumulators also require trusted setup generally, violation of the trusted setup lets you make false membership proofs.  (The same protocols could be applied to non-trapdoor groups of unknown order; but the performance and security are much more questionable.)

So I'm not seeing how this can help with UTXO.

Though you actually do not need fancy number-theoretic cryptography: as has been proposed previously, if the utxos are stored in an insertion-ordered hash tree, appending to the end requires a log-sized proof (just the leading edge of the tree), and showing membership requires a log-sized proof. Membership proofs can be updated cheaply by observing other updates that intersect with your proofs. And spending requires just zeroing out a member, which can be done with the same proof path as was used to show membership.
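A minimal sketch of those mechanics, assuming SHA-256 and 32-byte nodes (illustrative, not a specification): membership is shown by recomputing the root along a path of sibling hashes, and a spend reuses the same path to compute the new root with the leaf replaced by zeros.

Code:
import hashlib

ZERO = b"\x00" * 32

def h(left, right):
    return hashlib.sha256(left + right).digest()

def root_from_path(leaf, index, siblings):
    """Recompute the root from a leaf, its position, and its sibling hashes."""
    node = leaf
    for sib in siblings:
        node = h(node, sib) if index % 2 == 0 else h(sib, node)
        index //= 2
    return node

def verify_membership(root, leaf, index, siblings):
    return root_from_path(leaf, index, siblings) == root

def spend(root, leaf, index, siblings):
    """Zero out a proven member; returns the new root."""
    assert verify_membership(root, leaf, index, siblings)
    return root_from_path(ZERO, index, siblings)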

So you can imagine a world with stateless miners and full nodes, and wallets either tracking their own coin proofs or outsourcing that work to archive nodes that help them form the proofs needed for their transactions.

If you go out and compute the concrete sizes for the above scheme you'll find two things: the results, even for gigantic databases, are _much_ smaller than RSA accumulator approaches even though it scales with a log(), due to constant factors (and of course it's much less CPU intensive too); but the bandwidth required for processing transactions/blocks is increased massively.  So it isn't clear that it would actually be a win, as bandwidth tends to be the more scarce and slower-growing resource.
869  Bitcoin / Development & Technical Discussion / Re: Regtest Consensus Forking Behavior Introduced in Bitcoin Core in May 2014 on: May 19, 2015, 10:45:34 PM
I have to admit I'm disappointed that the answer is basically that Bitcoin Core doesn't feel like regression and simulation testing is important enough to warrant proper retarget behavior for it, but I appreciate the response nonetheless.
Testing is very important, but adding additional code to accommodate it makes the simulation even _less_ faithful, and more likely to miss real issues (or potentially even introduce real issues).  I do regression testing with an ASIC miner and the main network code (with checkpoints=0, of course). Testing with the actual production-time behavior is the gold standard and cannot be replaced with a shortcutted version without compromise.  (FWIW, testnet -- which, in spite of its own stupid shortcuts, is somewhat closer to Bitcoin -- has test cases in the chain for adjustment extremes.)

The only reason you were able to make this comment at all is because regtest exists, and the only reason regtest mode exists is because it was specifically created for the block tester harness that runs on an externally hosted CI setup, i.e. its intended use. For a long time the harness applied a patch to change the behavior to make the tests computationally cheaper -- at the expense of making them less accurate and faithful -- but maintaining the patch externally took work.

Use of it has expanded since then -- I think today a somewhat different approach would make more sense for the regtest shortcutting and would result in a smaller divergence from the normal network behavior (e.g. when in testing mode, mask out the highest bits of the block hashes before the target check).  So, quite the opposite: testing is important enough that one should actually be testing the actual Bitcoin network code and not an altcoinified mockup that makes testing easier; or, to the extent a modified version for testability is used, great care should be taken to minimize the number of differences (and to not add risk to production code).  How you could extract "testing is not important" from my comments about a whole alternative network mode created specifically for testing is beyond me -- specifically given the amount of effort I put in previously in convincing you that agreement testing was essential.
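To make that parenthetical concrete, a rough illustration (not a proposed patch, and only a sketch of the idea): in a test-only mode the proof-of-work check could clear the high bits of the block hash before comparing against the unchanged target, leaving the rest of the consensus path untouched.

Code:
# Illustrative only: clear the top bits of the hash in a test-only mode so that
# ordinary CPU mining passes an otherwise unchanged target check.
TEST_MODE_MASK_BITS = 48          # arbitrary choice for illustration

def check_proof_of_work(block_hash: int, target: int, testing: bool = False) -> bool:
    if testing:
        block_hash &= (1 << (256 - TEST_MODE_MASK_BITS)) - 1   # mask out high bits
    return block_hash <= target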



870  Bitcoin / Development & Technical Discussion / Re: Regtest Consensus Forking Behavior Introduced in Bitcoin Core in May 2014 on: May 19, 2015, 05:36:14 PM
This was actually pointed out at the time the change was made; but it seemed silly to change regtest's minimum just to make it fit (or worse, to add a lot of additional complexity and another number type just to handle values which _cannot_ occur in Bitcoin). Somewhat similar to how the testnet 20-minute rule exposes weird behavior where, if the penultimate block in the retargeting window is diff-1, the difficulty will jump from whatever it was back to ~1: the tests intentionally break the system in order to make it easier to test, and sometimes that has collateral damage. Needless generality exposes its own risks -- e.g. the OpenSSL bignum code was wrong in a platform-dependent way until fairly recently (and, irritatingly, we spent months with that discovery embargoed) -- any use of it carries its own risks.
871  Bitcoin / Development & Technical Discussion / Re: help with finding issues in a modifcation of Bitcoin core on: May 17, 2015, 03:13:56 AM
Achow101, you are basically asking for help to abuse the network.  Other people are not providing connectivity to you so that you can aggressively use up their resources and monitor their activity. Worse-- you are aggressively connecting to hosts without understanding software engineering well enough to avoid your own system failing-- how do you know that you're not also causing harm to those other hosts?  I hope that other people who understand the nature of the failure you're experiencing will also choose to not assist, sorry.  Please reconsider what you are doing.

872  Bitcoin / Development & Technical Discussion / Re: how to maximize block download speed on a single local node on: May 16, 2015, 10:05:50 AM
There is an intentional sleep in message handling.

It was removed in 0.11 (git master right now) via Patrick Strateman's PR 5971.

There are excruciatingly precise building instructions, https://github.com/bitcoin/bitcoin/blob/master/doc/gitian-building.md, sufficient to get someone who isn't terribly technical producing binaries bit-identical to the release.

(A simpler build process works too, of course, if you don't care about reproducing the builds-- but those instructions also have the benefit of being very precise.)
873  Bitcoin / Development & Technical Discussion / Re: Should we just remove the wallet function of Bitcoin Core on: May 11, 2015, 12:21:29 AM
Just see another victim of the 100-address trap in the Chinese subforum. 10BTC lost. Comments in the thread say the best option for noobs is to use a centralized bitcoin bank. I just feel speechless and don't want to argue with them

Should we just remove the wallet function of Bitcoin Core if HD is not (read: will never be) implemented? If people want to be a full node, they can use Armory. Otherwise, they can use Electrum.

Actually, wallets without deterministic backup (e.g. Bitcoin Core) should not be recommended in bitcoin.org

jl2012 -- your thread title and premise are needlessly antagonistic, and came close to being ignored by me completely.

Let's be clear about this.  In 2011 I proposed deterministic wallets. Other contributors to Bitcoin Core vigorously opposed them, because they make backups a liability and undermine good key management practices.

Subsequently, I came up with the homomorphic public derivation; Pieter later formalized it (and the non-publicly-generatable scheme as well) in BIP32.  However, we believed that the cryptography was too new and conjectural to rush out and deploy. Other wallets have done so, and some have done so in ways (e.g. only supporting the public derivation, and always using it, while simultaneously supporting key export) which have resulted in funds loss too.  Conservatism here is justified, especially since one can simply set the key-pool to an arbitrarily high value and obtain most of the value of the fancier schemes without many of their costs.

Today the people opposed to it have been convinced, or at least worn down. There is an implementation of BIP32 in the Bitcoin Core codebase.  Everyone active on the project has been working primarily on just keeping the system afloat; there are very few substantial contributors, and I have not lately seen any work from you on this.    Suggesting removing a huge swath of useful, widely used, and generally reliable software because some functionality is not (yet!) incorporated is an insult.
874  Bitcoin / Development & Technical Discussion / Re: WTF is this? Someone found a trick for fast mining? on: May 11, 2015, 12:01:25 AM
Indeed, yes, it is the case that the work first issued right after a new block often has no transactions, since createnewblock can take a couple hundred milliseconds -- the time isn't actually the verification, but just the time it takes to build a candidate block. Since 2013 or so many pool programs will generate a long-poll event (or equivalent) to cause updates, so that miners will move to new transaction-containing work even before finishing their current work unit; but they will spend some time on the empty work.
875  Bitcoin / Development & Technical Discussion / Re: WTF is this? Someone found a trick for fast mining? on: May 10, 2015, 09:00:07 PM
Mean blocktime validation of one transaction blocks: 
[...]
Shorter validation time for one transaction blocks is expected for miners

It is unclear to me what you are talking about. What specifically are you referring to by "blocktime validation"?  Are you talking about the ntime gaps?  Blocks of very slow miners should have lower timestamps, because they do not frequently update their midstate (e.g. they would claim older times because that was when they started); modern fast miners blow through the range quickly, and thus have plenty of opportunities to increment their time. (Many of the single-transaction blocks back then were believed to come from a botnet that verified nothing.)

If you are saying that smaller blocks take less time to validate: this is mostly untrue at the tip of the chain.  Leave your node running for 24-48 hours and then look at the block verification times. You'll see that the actual blocks, in spite of being huge, typically verify in a few milliseconds (-benchmark will enable ms-resolution timing results in the logs). This is because almost all of the verification is cached from the transactions being relayed earlier.
876  Bitcoin / Development & Technical Discussion / Re: How is UTXO currently stored in Bitcoin Core? on: May 09, 2015, 08:29:17 PM
However, 35.86 bytes is merely enough for storing the size of scriptPubKey (1 bytes) + std P2PKH scriptPubKey (25 bytes) + value (8 bytes)
So where is the data for the 36 bytes outpoint (txid + index)?
The representation is compressed.  The txid, height, version, and coinbase flag are _shared_ among all outputs of a transaction. The value and index are stored in variable-length compressed representations, and the scriptPubKey is stored in a templated compressed representation (e.g. it only encodes the type and the hash for P2SH/P2PKH outputs).

See the diagram at the top of coins.h.
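A conceptual sketch of that layout follows (field names and types here are illustrative; the diagram in coins.h is the authoritative description): the per-transaction metadata is stored once, keyed by txid, and each surviving output holds only a compact value plus a template code and hash for standard scripts.

Code:
# Conceptual layout only -- see coins.h in Bitcoin Core for the real encoding.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class CompressedOutput:
    value_compressed: bytes        # variable-length compressed amount
    script_template: int           # e.g. a code meaning "P2PKH" or "P2SH"
    script_hash: bytes             # the 20-byte hash for templated scripts

@dataclass
class CoinsEntry:
    """Shared per-transaction metadata plus its surviving outputs."""
    height: int
    version: int
    coinbase: bool
    outputs: Dict[int, CompressedOutput] = field(default_factory=dict)

# The database key is the txid, stored once per transaction rather than per output.
utxo_db: Dict[bytes, CoinsEntry] = {}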
877  Bitcoin / Development & Technical Discussion / Re: Max block size should also consider the size of UTXO set on: May 09, 2015, 08:08:26 PM
block size => bandwidth and storage cost
utxo size => RAM cost
No, the UTXO set isn't in RAM; it's just more storage. It's more costly because every verifier must have it online, whereas the majority of nodes can forget all the rest once it's buried, and certainly don't need to access it. I'd suggest thinking about it as storage which is necessarily online for the UTXO set, and storage which is mostly offline/nearline for the history, once you're past the most recent couple hundred blocks or so.
Quote
sigops => CPU cost
However, my impression is that the CPU cost is negligible comparing with bandwidth, storage and RAM and the sigop limit is there just as an anti-DOS mechanism
It's not exactly negligible, especially when one considers the cost of catching up -- there it's the dominating cost currently for many people: due to signature operations, syncing the chain only runs at about 11 Mbit/sec or so on a 3.2GHz quad core... (Well, libsecp256k1 will make this considerably faster -- but still slower than the high-speed consumer broadband that is widely available.)  It only doesn't impact block propagation because virtually all signatures are cached from the initial transaction relay now.

I don't really think it needs to be adjusted often (or hopefully ever), in that it can be set pretty conservatively; the most important point is getting to a situation where "I will pay less in fees if I do bar, so if I'm otherwise neutral I should choose to do so." Partially this is because you're right that it's mostly an anti-DoS mechanism.
878  Bitcoin / Development & Technical Discussion / Re: Effect of the Distribution of Block Interarrival Time On Blockchain Security on: May 09, 2015, 07:55:11 PM
Please let me know your thoughts.
The assumption we've historically made is that there would generally be a backlog of fee-paying transactions in excess of the blocksize limit.

I believe this would suppress the effect your work anticipates; do you agree?

As far as the impact: you must consider the attacker's success distribution and not just its expectation; a network that is less likely to get unlucky is also less likely to get lucky, increasing the low-probability success rate even if the expectation is down. Attacker utility is not a linear function; that an attacker would lose money on average but potentially win big with low chances isn't a great comfort. Smiley

You must also consider the hashrate lost to forking -- otherwise your same argument would apply generally and conclude that security goes up as variance goes down, as a rule; which can be easily demonstrated to be untrue (in theory, and -- thanks to some rather inadvisably constructed altcoins like "liquidcoin" -- in practice).

Consider an extreme hypothetical where there are few transactions but they pay fees far greater than the expected electrical power cost for the whole network to find a block. The hashrate goes from 0 to 100% the moment a transaction appears, and so a block would probably be found near instantly; let's assume instantly.  What would then happen is that all the honest miners would end up on separate forks -- effectively diluting their hashpower. An attacker conspiracy is now no longer in competition with the whole network, but just with the strongest fork (and if all honest miners are equal in power and the block find was actually instant, he'd be in competition with just a single miner until the fork resolves!).  (In this extreme example the network would eventually never converge, in fact, even absent an attacker; with less extreme examples it's not as bad, but you still might need to wait many blocks to be confident there wouldn't be a long reorg, due to a long chain of blocks being found 'concurrently', i.e. within the communications diameter of the network.)

The observation of the spare power in that situation is a good one (but again, also addressed by the backlog).
879  Bitcoin / Development & Technical Discussion / Re: Max block size should also consider the size of UTXO set on: May 09, 2015, 07:28:16 PM
So your idea is to replace the MAX_BLOCK_SIZE with a single composite score of block size, delta utxo size, and something else?
Yes (the something else would be the sigops limit; though sigops actually have a kind of quadratic cost, due to the cost of rehashing the transaction and all its inputs; but all this only needs to be very roughly correct in order to get the behavioral incentives (don't use frivolous sigops!) right).
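For concreteness, a sketch of what such a composite score might look like, with entirely made-up weights and limit (only the rough relative weighting matters for the incentive argument):

Code:
# Illustrative weights only -- calibrating them is the real policy question.
W_BYTES = 1.0        # cost per serialized byte
W_UTXO_DELTA = 4.0   # cost per byte of net UTXO growth (negative delta is a credit)
W_SIGOPS = 50.0      # cost per signature operation

MAX_BLOCK_COST = 1_000_000

def block_cost(serialized_bytes, utxo_delta_bytes, sigops):
    return (W_BYTES * serialized_bytes
            + W_UTXO_DELTA * utxo_delta_bytes
            + W_SIGOPS * sigops)

def block_within_limit(serialized_bytes, utxo_delta_bytes, sigops):
    return block_cost(serialized_bytes, utxo_delta_bytes, sigops) <= MAX_BLOCK_COST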
880  Bitcoin / Development & Technical Discussion / Re: Max block size should also consider the size of UTXO set on: May 09, 2015, 07:02:09 PM
since unspendable outputs should not be entered into the UTXO set.
Unless P == NP, unspendability is undecidable in the general case... someone can always write a script whose spendability can't be decided without insanely large work, if they want (e.g. push zeros, OP_SHA1 OP_EQUALVERIFY ... does the all-zeros SHA1 have a preimage?).

So once you're depending on their cooperation, defining a single known unspendable kind seems reasonable. Someone who wants to bloat the utxo set with an unspendable output always can; unfortunately.

Using that system, the entire UTXO set would be 16 bytes per entry for the key and value.
That's not much below the current system. The value and outpoint index are compressed and take just a couple of bytes; the scriptPubKey is template-compressed and takes 20 bytes for most scriptPubKeys. The version, height, coinbase flag, and txid are shared among all outputs with the same id. Smiley

Quote
It wouldn't even be required to store the entire hash.  A node could salt the hashes and collisions would be very unlikely.
I'd previously suggested something similar (using a permutation) to encrypt the UTXO data locally, to reduce problems with virus data in the UTXO set triggering AV, and the risk of committing a strict-liability crime by storing someone else's data in the UTXO set.  Especially considering how small a compact encoding is, I'm not sure that doing something where one must repeat the scriptPubKey across the network (to provide the preimage) is a real win. Bandwidth seems likely to be in shorter supply than even fast storage.

I don't think collisions are even harmful, so long as they're unpredictable by an attacker.  Well, I suppose I could just flood you with invalid transactions knowing that after enough of them one would collide for sure and you'd allow it. Then you'll go mine a bad block and fork yourself off -- but that scales exponentially with the data size; it's a multi-way second-preimage problem, so even a 64-bit value is pretty hard to hit. Tricky: against 10M UTXO entries, with a 64-bit hash, the expected effort to get a hit is 10,000 bad transactions per second for about 2135 days.  So long as you're sending the preimages, though, the amount stored doesn't need to be normative; increasing the size to 72 bits makes it look quite a bit less feasible.
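Working that figure through explicitly (assuming a uniformly random 64-bit salted hash, 10 million UTXO entries, and 10,000 attacker transactions per second):

Code:
# Expected work for an attacker to collide with any existing salted 64-bit hash.
UTXO_ENTRIES = 10_000_000
HASH_BITS = 64
ATTEMPTS_PER_SECOND = 10_000

hit_probability = UTXO_ENTRIES / 2 ** HASH_BITS          # per random attempt
expected_attempts = 1 / hit_probability                   # ~1.8e12 attempts
expected_days = expected_attempts / ATTEMPTS_PER_SECOND / 86_400

print(f"expected attempts: {expected_attempts:.3g}")
print(f"expected time at 10k tx/s: {expected_days:.0f} days")   # roughly 2135 days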

I wrote this long time ago when I knew Bitcoin not very well. But I always think this has to be done if we increase the max block size.

I would like to rewrite it as follow:

We will formally define "utxoSize", which is

txid (32 bytes) + txoIndex (varInt) + scriptPubKey length (varInt) + scriptPubKey (? bytes) + value (8 byte) + coinbase (1 byte) + coinbase block height (0 or 4 bytes)

This is much larger than the encoding we currently use.

Quote
Code:
utxoDiff = (total utxoSize of new UTXOs) - (total utxoSize of spent UTXOs)
With utxoDiff, we may have a soft-fork rule of one of the followings:
  • 1. Put a hardcoded cap to utxoDiff for each block. Therefore, we will be sure that the UTXO set won't grow too fast; or
  • 2. If a block has a small (or even negative) utxoDiff, a higher MAX_BLOCK_SIZE is allowed
That will encourage miners to accept txs with small or negative utxoDiff, which will keep the UTXO set small.

Having multiple limits makes the linear programming problem of deciding which transactions to include much harder (both computationally and algorithmically); it also means that a wallet cannot precisely set a fee per byte based on the content of the transaction without knowing the structure of all the other transactions in the mempool.

Did you see my proposal where I replace size with a single augmented 'cost' that is a weighted sum of the relevant costly things we'd hope to limit?  (This wouldn't prevent also having worst-case limits in any of the dimensions, if really needed.)

