1001  Bitcoin / Development & Technical Discussion / Re: Applying Ripple Consensus model in Bitcoin on: February 15, 2013, 08:13:20 AM
In case you are not aware, checkpoints (https://github.com/bitcoin/bitcoin/blob/master/src/checkpoints.cpp) are currently applied in the Satoshi client. This means that no matter how much hashing power someone has, they will create a hard fork if they try to replace blocks before the last checkpoint. Therefore, bitcoin has never been a pure PoW network.

Currently, checkpoints are added roughly every few thousand blocks, and their placement is determined solely by the devs. So theoretically, the devs could hardcode every single new block as a checkpoint, making themselves the central bitcoin clearinghouse, working like SolidCoin. My proposal is just trying to decentralise the process of checkpointing. If 6 blocks is considered too short, we may make it every 100 blocks, but definitely not thousands of blocks. A checkpoint every 100 blocks is not too bad; at least I would be sure that the 10 million USD worth of BTC I received in the morning will have become completely non-chargebackable by the evening.

The reason why checkpoints are included in the reference client isn't to define what the valid blockchain is. They are included primarily so that during your initial synchronization with the network, when your client has no idea what the valid blockchain is, someone can't feed it an alternate blockchain created from scratch with a lower difficulty but an equally long history. Remember that for much of Bitcoin's history difficulty was so low that an attacker would have a relatively easy time creating a chain with a higher total work up to that point. With checkpoints, if your client happens to initially connect to evil nodes broadcasting a fake history, it will eventually hit a checkpoint, see that the hashes don't match, and at worst simply fail to sync up until it finds an honest node to connect to. For instance, I run the only testnet DNS seed (new in 0.8), and if the IRC channels also used for seeding were down, I could easily make clients believe any chain I wanted by mining my own chain and advertising only evil nodes I controlled. (Testnet has only one checkpoint, just 500 blocks in.) Of course, testnet BTC are worthless, so I'd accomplish nothing more than annoying people.

The second thing checkpoints do is remove the requirement to actually do ECC signature verification for transactions in any block prior to the last checkpoint. Basically, if a transaction is in a block protected by a checkpoint, you can be sure that the transaction is valid, so you can skip checking the signatures and instead just check the transaction and block hashes to make sure you have the correct data, and update the unspent transaction output database. This significantly speeds up initial synchronization on slow machines.
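
Here's a rough sketch of that shortcut as I understand it (my own toy model in Python, not the reference client's actual code; the checkpoint height is just an illustrative placeholder): blocks at or below the last checkpointed height still get their hashes checked and the UTXO set updated, but the expensive signature step is skipped.

Code:
from dataclasses import dataclass

LAST_CHECKPOINT_HEIGHT = 216_116   # illustrative placeholder, not a real checkpoint

@dataclass
class Tx:
    txid: str
    spends: list
    creates: list
    def verify_signatures(self) -> None:
        print(f"ECDSA-verifying {self.txid}")   # stands in for the expensive part

def connect_block(txs: list, height: int, utxo: set) -> None:
    check_sigs = height > LAST_CHECKPOINT_HEIGHT
    for tx in txs:
        if check_sigs:
            tx.verify_signatures()           # skipped for checkpointed history
        utxo.difference_update(tx.spends)    # hash/UTXO bookkeeping always happens
        utxo.update(tx.creates)

utxo = {"coin-a"}
connect_block([Tx("tx1", ["coin-a"], ["coin-b"])], height=100_000, utxo=utxo)  # silent
connect_block([Tx("tx2", ["coin-b"], ["coin-c"])], height=250_000, utxo=utxo)  # verifies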

The checkpoints may be chosen solely by the devs, but it would be significantly harder to sneak in malicious checkpoints than it would be to steal your coins any other way. They're just a list in src/checkpoints.cpp and any change to that list gets reviewed by lots of people. For instance I'm not a developer, but I personally made a point of reviewing Gavin's new checkpoint proposed for 0.8, and lots of people on IRC reviewed the change as well.

You can always turn checkpoints off by using the -checkpoints=0 flag. If for some reason there was a re-org event large enough that thousands of blocks were re-organized, disabling checkpoints and using the new, longer, re-orged chain might be a perfectly reasonable thing to do, although trust in Bitcoin at that point would probably be so shaken it might not matter much.
1002  Economy / Service Discussion / Re: Blockchain.info coin mixer TOTAL RIPOFF on: February 13, 2013, 04:41:12 PM
I am not averse to lowering the fees. Those of you who feel a 1.5% fee is too high: would you use the service at a 0.5% fee?

I'll also chime in and say I think the 1.5% fee is fine. The speed at which you send unconfirmed funds, even for large amounts, is remarkable, and as I know very well it involves quite a lot of risk. Myself, I'm happy to pay for that convenience and use the mixer (really, coin-swapper) all the time. If anything, the main thing I want to see is more control, like the ability to set the tx fee for the mixer output (outputs often take far too long to confirm due to their low priority), and the ability to choose the number of outputs that will be used.
1003  Economy / Service Discussion / Re: coinbase.com sending unconfirmed? Huge risk? on: February 13, 2013, 04:33:08 PM
Wow! It surprises me how many people are willing to accept a long chain of unconfirmed transactions. Also, there are some decent fees there for a forward looking miner to pick up.  If this happens occasionally, I'd think there'd be some incentive for someone to code one.

Luke-Jr has a pull request called child-pays-for-parent that does this. His Eligius pool runs with this patch applied, although Eligius is relatively small and mines roughly a block a day.
1004  Economy / Service Discussion / Re: coinbase.com sending unconfirmed? Huge risk? on: February 13, 2013, 07:35:10 AM
FWIW one way this may have happened is if they accepted one (or possibly two) confirmations as sufficient to re-use the txout to make another transaction, and then the block that confirmed the tx was orphaned and that tx was not picked up by another miner. The reference Bitcoin client works this way.

Not quite as negligent, but they still should really be waiting at least two confirmations, preferably 3 or more, given they are handling other people's money.
1005  Bitcoin / Development & Technical Discussion / Re: Ultimate blockchain compression w/ trust-free lite nodes on: February 13, 2013, 05:00:19 AM
Quote
In a radix tree what you would do is first add each transaction to the tree, keeping track of the set of all modified nodes. Then once you had finished the update you would recalculate the hashes for each changed node, deepest first, in a single O(n) operation.

O(N log(N))?

Each node may cause an update to a full path to the bottom of the tree.

No, actually. Every operation in a radix tree takes O(k) time, where k is the maximum length of the key in the set. Since Bitcoin transactions have a fixed key length of 256 bits, that's O(1) time. Additionally, since the number of intermediate nodes created for a transaction in the tree can't be more than k, a radix tree is O(k*n) space; again, O(n) space for Bitcoin.

I mean, it's not so much that you are wrong, it's just that the log2(n) part is bounded by a fixed small number, so it really is appropriate to just say O(n), and as I explained, updating a batch of n transactions, especially given that n << N (where N is the total size of the txout set), is an efficient operation. Note that the n in O(n) is the number of new transactions, not the number of existing ones.
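
To make the key-length bound concrete, here's a tiny Python toy (my own illustration, using an uncompressed trie rather than a true radix tree): every insert touches at most one node per hex digit of the txid, so with fixed 256-bit keys the per-operation work never exceeds 64 steps, regardless of how many transactions are already stored.

Code:
import hashlib

def insert(root: dict, txid_hex: str) -> int:
    node, steps = root, 0
    for digit in txid_hex:                 # at most 64 iterations, ever
        node = node.setdefault(digit, {})
        steps += 1
    node["txid"] = txid_hex
    return steps

root: dict = {}
txids = [hashlib.sha256(bytes([i])).hexdigest() for i in range(1000)]
print(max(insert(root, t) for t in txids))   # always 64, independent of set size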

It is 2^n work, not n^2, I assume a typo?

Oops, good catch.

Quote
Yeah, until this is a hard and fast network rule you'll have to check that the hashing power devoted to these UTXO proofs is sufficient.

I have to look up the details of merged mining, but I think you get the same protection as the main chain (at least for the blocks where it works)?

For determining if a block in an alt-chain can be reversed you could be correct under some rules, but in this case each valid PoW is really more like a vote that the UTXO set mined by that PoW is valid. Thus the protection is only the hashing power that mined the blocks containing the PoW's.

Quote
Speaking of... we should have every one of these UTXO things have a reference to the previous valid UTXO set.

Absolutely.  I assumed that was the plan.

You'd hope so, but I'll have to admit I somehow didn't realize that until today.

Right, a general way to start new chains would be a good idea.  There would need to be some way to "register" a genesis hash and then that chain would only be used for 1 purpose.

The key is keeping agreement on the rules for the parallel chains.

It's tricky though, because the PoW can mean totally different things for different types of chains. Not to mention how for any application but timestamping you need to write a whole set of chain rules too.

That said, following a single merge-mining standard is a good thing; I only proposed this new one because as far as I know namecoin is the only one using the existing multi-chain standard, and that standard sucks. However:

Quote
scriptSig: <32-byte digest>
scriptPubKey:

Are transactions with 0 in 0 out allowed under the spec?

They sure are! The only thing a tx needs is one or more txins and one or more txouts. Both scriptSigs and scriptPubKeys are allowed to be empty, and the value can be zero. (Although you can't spend an empty scriptPubKey with an empty scriptSig; something needs to push a true value onto the stack.)

So, this basically creates a chain that has been stamped over and over with the output of one tx being the input to the next one.

What happens if you have something like

Root -> none -> none -> Valid1 (from root) -> Valid2 (from valid1) -> Invalid1 (from valid2) -> stamped (from Invalid2) -> ....

Not quite. The txin is just there because a Bitcoin transaction is only valid if it has a txin; what is actually in the txin is totally irrelevant. The link to the previous "considered good" block has to be within the UTXO header, and that nLockTime would best be used only as an auxiliary bit of data to allow nodes to reproduce the UTXO chain block header after they deterministically compute what the state of the UTXO tree would have been with that block's transactions included. It's just a way of avoiding the pain of implementing the P2P network that really should be holding that data, and getting something working sooner. It's a solution that uniquely applies to a UTXO merge-mined alt-chain; no other type of chain would allow a trick like that.
1006  Bitcoin / Development & Technical Discussion / Re: Ultimate blockchain compression w/ trust-free lite nodes on: February 13, 2013, 01:11:04 AM
This requires the tree to be completely regenerated for each block, since the hash of every transaction would change.

Read the rest of the message - that's just a toy example to illustrate the general concept. My worked example does not require the whole tree to be recalculated nor does it require the block number for lookup.

The attack on the system I suggested would require having lots of transactions that have the same initial bits in their hash.

If you had 1 million of them and added them to the tree, then it would become quite deep.  Each time you add one, the nodes would have to recalculate the entire path from root to that node and redo all the hashing.  The effort per node would be proportional to N squared, where N is the number of collisions.

I think your misunderstanding stems from the idea that the datastructure and the authenticating hash are the same thing. They aren't, they're quite separate. The authenticating hash is an addition to the datastructure, and can be calculated independently.

Take your example of adding a million transactions, which is unrealistic anyway as a 1MiB block can't have more than a few thousand. In a radix tree what you would do is first add each transaction to the tree, keeping track of the set of all modified nodes. Then once you had finished the update you would recalculate the hashes for each changed node, deepest first, in a single O(n) operation.

A real Bitcoin node would basically have a limit on how much CPU time it was willing to use on adding transactions to the UTXO set, and collect incoming transactions into batches small enough to stay under that limit. Given we're only talking tens of updates per second, chances are this optimization wouldn't even be required anyway.
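
For what it's worth, here's a minimal Python sketch of that batched update (my own illustration, again using an uncompressed trie for brevity): insert the whole batch first while remembering every node touched, then recompute the authenticating hashes once, deepest nodes first, so each dirty node is rehashed once per batch rather than once per transaction.

Code:
import hashlib

def sha256d(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

class Node:
    def __init__(self, depth: int = 0):
        self.children: dict = {}
        self.depth = depth
        self.txid = None
        self.digest = b"\x00" * 32

def apply_batch(root: Node, txids_hex: list) -> None:
    dirty = {root}
    for txid in txids_hex:
        node = root
        for digit in txid:
            node = node.children.setdefault(digit, Node(node.depth + 1))
            dirty.add(node)
        node.txid = txid
    # Deepest-first pass: every child digest is fresh before its parent is hashed,
    # so each dirty node is rehashed exactly once for the whole batch.
    for node in sorted(dirty, key=lambda n: n.depth, reverse=True):
        if node.children:
            node.digest = sha256d(b"".join(node.children[d].digest
                                           for d in sorted(node.children)))
        else:
            node.digest = sha256d(node.txid.encode())

root = Node()
apply_batch(root, [hashlib.sha256(bytes([i])).hexdigest() for i in range(100)])
print(root.digest.hex())   # the authenticating root digest after the batch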


Anyway, thinking about this a bit more, on reflection you are right and I think this unbalanced tree stuff is a total non-issue in any radix tree or similar structure where depth is a function of common-prefix length. For the 1.6 million txs in the UTXO set you would expect a tree about 20 branches deep. On the other hand, to find a transaction with n bits in common with another transaction requires n^2 work, so right there you aren't going to see depths more than 64 or so even with highly unrealistic amounts of effort thrown at the problem. (Bitcoin has done a 66-bit zero-prefix hash IIRC.)

The issue with long depths is purely proof size, and 64*n isn't much worse than 20*n, so as you suggest, why make things complicated?
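
A quick numeric check of those two figures (my own arithmetic, assuming a binary radix tree keyed on txid bits, and using the 2^n rather than n^2 figure from the follow-up exchange):

Code:
import math

utxo_txs = 1_600_000
print(math.log2(utxo_txs))      # ~20.6 -> a tree about 20 branches deep
print(f"{2 ** 64:.2e}")         # ~1.8e19 hash attempts to force a 64-bit shared prefix
print(f"{2 ** 66:.2e}")         # ~7.4e19, roughly the work behind a 66-bit zero-prefix hash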

Too bad, it was a clever idea. Tongue


Quote
If you really need to prove a tx was in the UTXO set as of the current best block, either wait for another block to be generated, or prove that it isn't in the UTXO set of the previous block, and prove that it is in the merkle tree of the current best block.

You will probably have to do something like that anyway.  The plan was to merge mine, so some bitcoin blocks won't have matching tree root node hashes.  So, prove it wasn't spent up to 10 blocks previous and then check the last 10 blocks manually.

Yeah, until this is a hard and fast network rule you'll have to check that the hashing power devoted to these UTXO proofs is sufficient.

Speaking of... we should have every one of these UTXO things have a reference to the previous valid UTXO set. This would turn the whole shebang into a proper chain and thus allow clients to figure out both which is the longest chain, and how much hashing power is being devoted to it.

Equally if we think in terms of merge-mining a chain, adding support for additional UTXO indexes, such as known scriptPubKeys and so forth, is just a matter of adding additional chains to be merge mined, and UTXO-only and UTXO+scriptPubKey can co-exist just fine in this scenario. A disadvantage is we'll need to add some stuff to the P2P network, but I have another idea...

So right now, merge-mined chains have the problem that the proof of work goes through the coinbase transaction. The issue here is that to prove a path to the proof of work, you need the whole coinbase transaction, and it can be quite large, for instance in the case of P2Pool or Eligius. So I'm proposing a new standard, where one transaction of the following form is used to include an additional merge-mined digest, and that transaction will always contain exactly one txin, and one txout of value zero using the following:

scriptSig: <32-byte digest>
scriptPubKey:

Any digest matching this form will be assumed to represent the miner's hashing power, thus miners should not allow such transactions into their blocks blindly. They are currently non-standard, so this will not happen by default, and the scriptSig has no legitimate use. The scriptPubKey is spendable by the scriptSig, so for the txin miners would usually use the txout created by a previous block following this standard. If none are available the miner can insert an additional zero-fee transaction creating a suitable txout (of zero value!) in the block.

The digest would of course represent the tip of a merkle tree. Every merge-mined digest in that tree will have a zero byte appended, and then the digests will be hashed together. What would go in the alt-chain is then the merkle path, that is, every leaf digest required to get to the transaction, and what side the leaves were on. Note how this is different from the merge-mining standard currently used by namecoin, and fixes the issues it has with conflicting slot numbers.

Appending the zero byte is critical, because that action means that to verify an alt-chain block hash was legitimately merge-mined you simply check that every other digest in the path to the transaction has exactly 32 bytes, and that the transaction itself follows the above form. Note this also means the miner has the flexibility to use something other than a merkle tree to combine the alt-chain block hashes if they want; alt-chains should put reasonable limits on PoW size.
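
To make that verification rule concrete, here's a small Python sketch under my own reading of the scheme (the exact concatenation order and hash function aren't pinned down above, so treat those details as assumptions): the alt-chain block hash gets a 0x00 byte appended before being hashed into the tree, every sibling digest supplied in the path must be exactly 32 bytes, and the result must equal the 32-byte digest pushed in the scriptSig. Because inner nodes only ever hash 32-byte children, the 33-byte leaf can't be confused with an interior node.

Code:
import hashlib

def sha256d(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def verify_merge_mine_path(altchain_hash: bytes, path: list,
                           scriptsig_digest: bytes) -> bool:
    node = altchain_hash + b"\x00"      # the appended zero byte marks a leaf
    for sibling, side in path:          # side is "L" or "R"
        if len(sibling) != 32:          # the check that makes the zero byte matter
            return False
        node = sha256d(sibling + node if side == "L" else node + sibling)
    return node == scriptsig_digest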

Alt-chains should also still support the same idea in the coinbase, for miners that don't plan on making large coinbase transactions. (the majority)

Now, the useful part is we can easily add more than one of these transactions, and distinguish them by different sequence numbers in the txin. You would do that for the UTXO set, with one sequence value defined for just the UTXO set itself, another for a scriptPubKey index, and whatever else gets added as the system is improved. The previous block where this miner considered the merge-mined UTXO digest to be valid can be specified with nLockTime. The advantage of this idea is that because the final UTXO digests are completely deterministic, we don't need to build a new P2P network to pass around the digests in the merkle path to the coinbase.

This also shows the advantage of using a separate transaction, because it keeps the UTXO proof size to a minimum, which is important for applications like fidelity-bonded banks where UTXO proofs would become part of fraud proofs. Equally, being able to prove the hashing power devoted to the UTXO set merge-mine chain is useful, and the only way to do that is to provide a bunch of recent UTXO block headers and associated PoWs. Again, keeping the size down for this is desirable as the SPV node examining those headers may be on a low-bandwidth connection.
1007  Bitcoin / Development & Technical Discussion / Re: Ultimate blockchain compression w/ trust-free lite nodes on: February 12, 2013, 09:05:39 PM
Actually, here is an idea that could work: rather than making the generated UTXO set be the state of the UTXO tree for the current block, make it the state of the UTXO tree for the previous block. Then for the purposes of putting the tx in the tree, essentially define the tx hash not as H(d) but as H(b | d) with b equal to the hash of the block where the tx was confirmed. This works because finding a block hash is difficult, and once you have found any valid block hash not using it represents a huge financial hit.

Of course, this doesn't actually work directly, because you usually don't know the block number when a txout was created. So we'll actually do something a bit different:

For every node in your radix tree, store the block number at which that node was last changed. The 0th node, the empty prefix, gets block 0. For the purpose of the radix tree, define the radix hash of the transaction, rh, as rh=H(hn | h), where hn is the hash of the block at that height and h is the transaction's actual hash. Ignoring the fact that the transaction hash prefix changed, we can treat the rest of the bytes of the radix tx hash normally to determine which prefix edge is matched, and thus what node to follow down the tree.
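
A small Python rendering of that radix hash (my own illustration; the values are placeholders): rh = H(hn | h) keys the transaction into the tree, and since an attacker can't grind tx hashes against a block hash that hasn't been mined yet, the tree-unbalancing attack loses its foothold.

Code:
import hashlib

def sha256d(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def radix_hash(block_hash: bytes, tx_hash: bytes) -> bytes:
    return sha256d(block_hash + tx_hash)    # rh = H(hn | h)

# Illustrative values only:
hn = sha256d(b"hash of the block stored on this node")
h = sha256d(b"some transaction")
rh = radix_hash(hn, h)
print(rh.hex())   # rh, not h, decides which prefix edge to follow down the tree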

Other than having to provide the block hash, proving that a transaction is in the UTXO set is not really any different than before. The proof is still a merkle path, and the security is still based on the infeasibility of reversing a hash function. I think it would be a good idea to add in some of the additional ideas I outlined in https://bitcointalk.org/index.php?topic=137933.msg1470730#msg1470730, but that is true of any UTXO idea.

If you really need to prove a tx was in the UTXO set as of the current best block, either wait for another block to be generated, or prove that it isn't in the UTXO set of the previous block, and prove that it is in the merkle tree of the current best block.
1008  Bitcoin / Development & Technical Discussion / Re: Ultimate blockchain compression w/ trust-free lite nodes on: February 12, 2013, 08:20:15 PM
You can't make that assumption because an attacker might create a bunch of transactions by brute force search that just happen to create an unbalanced tree.

It would only affect the transactions in question.  Also, creating unbalancing transactions requires generating hashes with the same starts, which is hard to do.

There is nothing else in existence that has put as much effort towards finding statistically unlikely hashes as Bitcoin has! Heck, the Bitcoin network has found hashes with IIRC 66 zero bits.

You just cannot assume that hashes calculated directly from data that an attacker has any control of will be uniformly distributed under any circumstance. Someone will get some GPUs together and write a program to find the right hashes to unbalance your tree just because they can.
1009  Bitcoin / Development & Technical Discussion / Re: Purchasing fidelity bonds by provably throwing away bitcoins on: February 12, 2013, 07:59:43 PM
Just to jump in and agree with Mike; I would guess the Foundation would prefer not to participate in this as well; you only need to get named in one suit before it just isn't worth it.

Absolutely. After all, they're called "bonds"; it is quite conceivable a judge could be convinced that means the money should be returned to the users in the event of fraud by the entity that purchased the bond if the recipient is well-known.

In this case, though, there might be a business case to be made for an escrow shop or business to do bonded verification. It depends how 'distributed' you want to make things. On balance, I'm not sure why paying miners directly is a bad idea, unless you can work a decentralized scheme where the fidelity bond buyer gets paid back if there are no claims.

In the case of fidelity-bonded banks a reasonable thing to do would be to set up a service that runs the same sort of trusted hardware cryptographic co-processor with remote attestation that the banks themselves would use, and have that hardware generate secure fidelity bond deposit addresses. The program running on the hardware would then evaluate any fraud proofs produced by defrauded users, and if the proof was valid, refund the user from the deposit address. Equally it could just be a trusted company with some lawyers.

It's not the right solution for every purpose, but the underlying technology has the flexibility to be used this way.
1010  Bitcoin / Development & Technical Discussion / Re: Ultimate blockchain compression w/ trust-free lite nodes on: February 12, 2013, 07:46:25 PM
Have you considered that, since the hashes are well distributed, you could mostly skip tree balancing?

You can't make that assumption because an attacker might create a bunch of transactions by brute force search that just happen to create an unbalanced tree. If you have a system where hash values determine the balance of the tree, you have no choice but to have some sort of way to measure how far the tree is out of balance, which might be difficult if not all nodes know the full state of the tree, and to prohibit the creation of unbalanced trees. You also need to be careful to ensure that if a tree is at the threshold just prior to being too out-of-balance to be legal, there are legal operations that make the tree balanced again so honest miners can fix the problem. Finally, fixing an unbalanced tree has to always be cheap.
1011  Bitcoin / Development & Technical Discussion / Re: Creating Bitcoin passports using sacrifices on: February 11, 2013, 02:31:51 PM
Quote
Is this really an issue? Well, I think the question really is: why would you use a sacrifice method where the value is so sensitive to so many variables? In particular, a method where the actual cost of the sacrifice is inversely and exponentially dependent on the size of the largest miner.

You said yourself, the orphan rate is not only measurable but also quite low. I'm not sure it's really that complicated.

Re-read what I wrote. A low orphan rate is what makes the attack possible. Assuming it is always zero is the conservative way of estimating sacrifice value. More to the point, the real issue is the sensitivity to mining power; try experimenting with plotting the cost vs. hashing power curve. It's a very, very fast decline to zero as you get close to even just 20%, whereas tx-in-tx is just a straight line from 0% to 100% and doesn't require any analysis.

And again, it only takes a small orphan rate to mess up multiple tx schemes by dramatically increasing the difficulty of getting a valid set of consecutive transactions. You said it yourself that miners don't appear to always be rational profit-driven entities. All you need is some that have a grudge against your service for whatever silly reason.


So then sites that are using passports to avoid being abused just set the multiplier in their config file to 0.7 and they're done. The actual multiplier can be updated every so often and re-distributed ... no big deal as the operators who are accepting these passports already need to share blacklists and so on for the system to work.

You know, I think this gets to the core of my objection: why settle for a system that requires all this maintenance? For tx-in-tx you can set the discount to 0.5 on the assumption that if >50% of the hashing power is controlled you're screwed anyway. Done. If your particular use allows for resale you will need to check that the secondary market hasn't crashed, but that's true regardless of how the sacrifices are created.

Ultimately I understand that passports probably don't really need iron-clad security, and do need lots of human intervention for other reasons. But the cost of getting the best security possible, short of just sending value to unspendable txouts, is pretty low, and it does make for shorter proofs than multiple-tx schemes. (if n > 2)

Its only real disadvantages are the need to ensure the signatures on the published sacrificial tx are valid (a potentially tricky bit of code), the fact that the initial announcement tx is (currently) non-standard, and the need to write a blockchain-watching bot to recognize the sacrificial txs and broadcast them (or add them to the local mempool and try to mine them yourself). I just don't see those three issues as major problems, and I'd much rather see your passports use the same system as fidelity-bonded coin transfer services and whatever else people come up with, so efforts can be focused on getting one solid system rather than a couple of incomplete ones.
1012  Bitcoin / Bitcoin Discussion / Re: SERIOUS VULNERABILITY related to accepting zero-confirmation transactions on: February 11, 2013, 11:33:54 AM
I've quite successfully double-spent non-final transactions by just putting a replacement in my wallet on a single well-connected node and waiting a few days.

But... doesn't that mean that people were treating a time-locked transaction as a normal unconfirmed transaction? They should not be treated as the same thing; that's the actual problem.

That's exactly what people were doing, or really, what the software people were using was doing. Keep in mind that a non-time-locked transaction that depends on a time-locked transaction is effectively time-locked... fixing the issue that way isn't trivial and would have required a lot more careful testing to be absolutely sure it worked; I should know, I wrote a patch that did exactly that kind of analysis before writing a version of the nearly trivial fix that Gavin wrote. It took me a few days to be convinced, but I think he made the right choice. There are also other, mostly theoretical, vulnerabilities related to non-final transactions that have not been disclosed as far as I know. They are fixed by Gavin's patch and are not fixed by just handling non-final txs differently in the UI. They also cannot be used to steal funds.
1013  Bitcoin / Bitcoin Discussion / Re: SERIOUS VULNERABILITY related to accepting zero-confirmation transactions on: February 11, 2013, 09:27:47 AM
nLockTime isn't killed "for good", it's just considered non-standard so the default client won't relay or pay attention to it. Since the client already doesn't support replacement, this really is pretty harmless. A future client that does support replacement can add it back in after another conversation.

Specifically, nLockTime is only considered non-standard when the transaction is not yet final; that is, the locktime hasn't been reached and the transaction is not eligible to be included in a block. Non-standard just means that nodes won't relay such transactions on the P2P network, but nothing stops you from saving the transaction yourself and broadcasting it when the locktime is reached and the transaction finalizes.
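
For reference, here's a rough Python sketch of the finality rule I'm describing (my own paraphrase of the reference client's logic, not a copy of it): nLockTime values below 500,000,000 are treated as block heights and larger values as Unix timestamps, and setting every input's nSequence to 0xffffffff disables the locktime entirely.

Code:
LOCKTIME_THRESHOLD = 500_000_000
SEQUENCE_FINAL = 0xFFFFFFFF

def is_final(nlocktime: int, input_sequences: list,
             block_height: int, block_time: int) -> bool:
    if nlocktime == 0:
        return True
    limit = block_height if nlocktime < LOCKTIME_THRESHOLD else block_time
    if nlocktime < limit:
        return True
    # Locktime not yet reached; final only if every input opted out of it.
    return all(seq == SEQUENCE_FINAL for seq in input_sequences)

print(is_final(250_000, [0], block_height=225_430, block_time=1_360_000_000))  # False: still locked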

Such feature can be really useful. Think inheritance for instance. How would you implement inheritance in a "bank" which uses multi-signature? You die, all your money dies with you?
Even for password regeneration... suppose you run a multisig bank in BTC, and you don't really trust all your clients to never forget their passwords (people do often forget/lose their passwords, and I bet that in some jurisdictions you would be forced to give your client a way to retrieve their money even if he forgot his password). Using nLockTime would be just great for this. You periodically transfer the money to a cold storage address the bank fully controls. While the client keeps logging in, you periodically reverse that transaction and generate a new one. If your client ever forgets his password for good, you'd just need to positively identify him (docs etc.) and ask him to wait like 3 months or something to have his coins back.

You can do that just fine even if non-final transactions are non-standard. Just create the transactions that send the money back to the client, set the nLockTime so that they will not be valid for 3 months, and then sign them. You can freely give the client copies of these transactions. As the three months approach, move the coins to invalidate the txins of the previous set of locked transactions and create new refund transactions. If the refund transactions are ever needed, just wait until they have finalized, and broadcast them on the P2P network as you would any other transaction.

Ultimately the thing is transaction replacement just doesn't work the way people want it to because you can always replace a transaction by finding a miner willing to ignore the non-final tx in their mempool and willing to mine another one for you. Additionally mempool contents don't last forever; I've quite successfully double-spent non-final transactions by just putting a replacement in my wallet on a single well-connected node and waiting a few days.
1014  Bitcoin / Development & Technical Discussion / Re: Creating Bitcoin passports using sacrifices on: February 11, 2013, 09:12:10 AM
Someone else should chime in with the math to work it out explicitly - I know way too little about probability math - but can you see how the probability aspect of it makes hash power much less important than the two-step publish commit?

Well, the assumption behind the 2-step process is that you can't predict who will mine a block a long way in the future. You could equally have your proof contain two transactions in two blocks and insist in the protocol that those blocks always be exactly 100 blocks apart and it'd work the same way, I'd think?

I sat down and (half) wrote a god-damn paper this weekend analyzing this stuff properly. I'll finish it up and make it public if anyone is actually interested, but the process absolutely convinced me that the two-step tx-in-a-tx solution is the only thing close to feasible that will keep a stable sacrifice market value. (without just sending coins to unspendable txouts of course)

First of all you have to assume that services will pop up to create these sacrifices for you. Thus you need to assume the sacrifice is being created by the entity with the largest single concentration of hashing power, and that's likely to be in the range of 10% to 30%.

Secondly because of that, for any mining-fee-based sequence, the first transaction is always nearly free. You can always mine it yourself by waiting a bit, and the only cost is the approximately 1% or 2% probability that the block gets orphaned and another miner gets the tx fee instead.

This means that asking for "two blocks n apart" is at best 50% efficient, because you have to assume the first block was self-mined. Secondly mining is a random process, so asking for exactly 1 block apart or 100 makes absolutely no difference.

It seems like the difficulty of obtaining 5-of-6 blocks in a row or similar is essentially equivalent - you have to dominate the network in either case, or be infeasibly lucky, but you can make a proof faster if you're trying to fill a majority of blocks in a span.

Sure, but this is a supply and demand problem. Let ω be the probability that the block a transaction is in is orphaned and the transaction is mined by someone else. If we attempt to mine a transaction containing a fee of value V ourselves, the cost to us is then V*ω. Assuming that miners always build on the best known block and don't deliberately attempt to orphan blocks to collect the transactions within, ω is probably approximately equal to the orphan rate itself, which appears to be around 1% or so. Also note that ω is smaller for a large miner than for a small one.

Now if I control q of the total hashing power, my expected number of blocks before I get n consecutive is (q^(-n)-1)/(1-q). For q=10% I need about 1 million attempts for n=6; however, the number of attempts drops extremely quickly as q increases or n decreases. Basically that means if there exist q and n such that (q^(-n)-1)/(1-q)*ω < v/V, where v is the actual value of the sacrifice and V is the face value, you are better off attempting to mine the sacrifice yourself rather than buying it fairly. For q=25% and n=3 this is true for v=V. (Remember that you can always finish the other blocks in a sacrifice the conventional way.) Allowing for n-of-m blocks just makes the problem worse, because it effectively increases the apparent hashing power available to the miner. Yet at the same time griefers can do a lot of damage by deliberately excluding your sacrifice transactions if n-of-m is not allowed, and additionally orphans quickly push up the cost of sacrifices. (About 10% extra for n=6 and 1% orphans.)
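
Here's the same arithmetic as a short Python check (my own code; q, n, ω, v and V follow the notation above):

Code:
def expected_blocks(q: float, n: int) -> float:
    """Expected blocks before a miner with hashing share q gets n in a row."""
    return (q ** -n - 1) / (1 - q)

def self_mining_pays(q: float, n: int, omega: float, v: float, V: float) -> bool:
    """True when self-mining the sacrifice beats buying it at actual value v, face value V."""
    return expected_blocks(q, n) * omega < v / V

print(round(expected_blocks(0.10, 6)))                      # ~1.1 million blocks
print(round(expected_blocks(0.25, 3)))                      # 84 blocks
print(self_mining_pays(0.25, 3, omega=0.01, v=1.0, V=1.0))  # True, as claimed above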

Is this really an issue? Well, I think the question really is: why would you use a sacrifice method where the value is so sensitive to so many variables? In particular, a method where the actual cost of the sacrifice is inversely and exponentially dependent on the size of the largest miner.

My later tx-in-a-tx proposal is far easier to reason about because the cost is simply bounded by the largest miner's hashing power and has no dependence on difficult-to-measure values like orphan rates. It's just a much better idea, which might explain why it took me 6 more months to think of it...

If the node that's verifying can't see it in the mempool and it's not in a block, then presumably the first transaction could be double-spent. Nodes can sync to each other's mempools these days using the mempool command, and I guess in future if we find a solution to zombie transactions living forever it'd be good to make nodes sync their mempools at startup. The current "solution" (I hesitate to use that word) is hardly convincing Smiley

But the tx-in-a-tx sacrifice isn't valid until the second tx, the one that actually sacrifices coins, is spent, so the tx being in the mempool is a non-issue. Anyway, sacrifices need to be provable to SPV nodes who don't care about the mempool.
1015  Bitcoin / Bitcoin Discussion / Re: SERIOUS VULNERABILITY related to accepting zero-confirmation transactions on: February 11, 2013, 05:37:29 AM
Gavin's pull request looks like it is disabling a feature needed by transaction replacement in favor of assisting those who are swapping tangible assets for unconfirmed transactions without waiting out the lock time.

Transaction replacement is currently disabled. Multiple parties discussed this issue extensively, and the consensus was that the minor utility of being able to broadcast non-final transactions was not important enough given the drawbacks.

True transaction replacement probably needs to be done as a separate P2P layer, and in any case always has the problem that there is nothing stopping miners from mining older versions of the transaction that they know about.

Satoshi's "wait 10 seconds for unconfirmed transactions to listen for a double spend" applies to transactions that can feasibly make it into a block. Not to ones with lock times decades in the future?!?

That's the main reason this was considered a serious issue: code was doing exactly that and not taking into account that transactions can be made invalid until thousands of years in the future. (The maximum nLockTime setting locks the transaction until July 18, 11515.) There are lots of UIs that make no distinction between transactions that can be mined now and ones that can't. The same applied to services; I did after all steal 5 BTC from blockchain.info exploiting this oversight.

It also seems like no miner made the wrong choice out of greed; rather, they took a committed transaction and put it in a block instead of its uncommitted locked version? Again, am I wrong?

A non-final transaction cannot be included in a block until it finalizes, and thus miners have nothing to do with the issue.
1016  Bitcoin / Project Development / Re: [BOUNTY] 200 BTC for lightweight colored coin client(s) on: February 08, 2013, 12:25:05 PM
Have you seen my post on fidelity bonds and contracts? Fidelity bonds are a close cousin to colored coins, with the exception that every txout is associated with a contract. For fidelity bonds this is the contract describing how the holder promises they will act, but for colored coins the "contract" could simply be the identifier of the color; the colored coin protocol for fidelity bonds uses the least significant bits of the txout value to separate contract and change outputs, and thus allows for arbitrary precision. Currently there is just contract and change, but the idea can easily be extended to multiple different colors in a transaction by using more bits if required, allowing fidelity bonds to be traded for colored coins representing other assets in the future.
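
As a toy illustration of that least-significant-bit tagging (my own sketch; the actual bit assignments in the fidelity bond protocol may differ), the lowest bit of a txout's satoshi value can mark it as a contract output versus plain change, so a client can classify outputs without any extra metadata:

Code:
CONTRACT_FLAG = 0b1

def tag_output(value_satoshis: int, is_contract: bool) -> int:
    """Clear the flag bit, then set it if this output carries the contract color."""
    return (value_satoshis & ~CONTRACT_FLAG) | (CONTRACT_FLAG if is_contract else 0)

def classify_output(value_satoshis: int) -> str:
    return "contract" if value_satoshis & CONTRACT_FLAG else "change"

out = tag_output(50_000, is_contract=True)
print(out, classify_output(out))                                # 50001 contract
print(classify_output(tag_output(25_000, is_contract=False)))   # change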

Contracts also allow for off-chain trading by inserting a special contract stating that some trusted ledger holder, possibly kept trustworthy by a fidelity bond, will maintain balances for the coins beyond some point. Of course, the problem of trusting some central ledger is exactly the problem that colored coins is trying to solve, but there is room for both solutions, especially as demand on the limited block space becomes high. Trusted ledgers can, after all, still transfer some of the balance they are keeping a ledger of back onto the block chain, allowing you to get your balance back. Similarly, the trusted ledger might actually be a merge-mined alt-chain using cross-chain coin trading. (Reminds me, I should ensure the contracts protocol is compatible with those transactions.)

Of course, you're pretty far along with colored coins with your existing protocol, but I thought you might be interested in what I'm working on. Also, thanks to everyone who developed the idea of colored coins in the first place; the fidelity bonds and contracts proposal wouldn't have happened without your ideas.
1017  Bitcoin / Development & Technical Discussion / Re: Creating Bitcoin passports using sacrifices on: February 06, 2013, 07:27:23 AM
Hmm. I'm not sure this undermines my argument Smiley I never heard of a "bonded guard" and I think I read more than the average person.

Ha. Might be a geographic thing; I'm in Canada. Of course, security is a niche field ultimately...

It's not just about incentive, what happens when you get a near orphan by pure luck? We already see transactions getting passed up by miners who missed them for whatever reason all the time. Yet as gaps increase, it just makes it easier for a relatively low hash rate miner to just wait until they get lucky, while only risking the small chance that their block is orphaned and another miner picks up the tx for the fee.

I didn't follow this, sorry Sad How does a low hash rate miner get to mine several blocks in a short timeframe, except through such wild luck that it's not worth worrying about?

Someone else should chime in with the math to work it out explicitly - I know way too little about probability math - but can you see how the probability aspect of it makes hash power much less important than the two-step publish commit?

How do you prove that a tx existed in the memory pool after the fact?

It's either in the memory pool and waiting to be included (in which case a verifying node can see it), or it's fallen from the pool into the chain, in which case the verifying node can see it or be given it.

Right, but if that node trying to verify the sacrifice didn't see it in the mempool, there is no way to prove it ever was in the mempool after the fact.
1018  Bitcoin / Development & Technical Discussion / Re: Creating Bitcoin passports using sacrifices on: February 05, 2013, 06:16:26 AM
I posted my first draft of something approaching a proper tech spec for this fidelity bond/contract stuff: https://github.com/petertodd/trustbits/blob/master/fidelitybond.md

I'm not really happy with it either, but I think it's better than fidelity bonds: my issue with this name is that both "fidelity" and "bond" (as a financial concept) are not terms commonly used in English. Fidelity tends to have connotations to sexual loyalty even though it can be used more generally, so it could be confusing to have it crop up in another context. Also whilst anyone who is interested in finance knows all about bonds, most people have never actually experienced one themselves.

Keep in mind that the term "fidelity bond" isn't just a finance term; it also crops up in stuff like security guard jobs. That's actually the first place I heard it, talking to someone explaining how they were now a bonded guard. That said, more specifically for what you are talking about re: wiki and similar services, I think "passport" is reasonable enough. It's also good that it has the real-world connotation that acceptance of a passport is not absolute. Similarly, what counts as blacklistable behavior is ultimately a human criterion, and you'll find that rough consensus emerges among groups of mutually trusted services, but a passport revoked by Wikipedia may not be considered revoked by 4chan.

Fidelity bonded banking has the advantage that the contracts can be made machine readable. Of course, equally, the disadvantage is that the contracts are machine readable... One little mistake...

As for the term "sacrifice", I think this is a really good one for describing the nitty gritty of any of these protocols; I was racking my brain trying to come up with a decent term and I think sacrifice solves that problem.

Seeing as the transactions are all-fee anyway, miners have no incentive to not include them (unless Bitcoin turns into a network for high value transactions only of course). I'd think you could set a threshold, say 4 of 6 blocks must include them, or something like that. The right gap thresholds could be determined empirically.

It's not just about incentive, what happens when you get a near orphan by pure luck? We already see transactions getting passed up by miners who missed them for whatever reason all the time. Yet as gaps increase, it just makes it easier for a relatively low hash rate miner to just wait until they get lucky, while only risking the small chance that their block is orphaned and another miner picks up the tx for the fee.

Besides, embedding a transaction inside another transaction seems superfluous (the second tx could just be broadcast and sit in the memory pool for a while).

How do you prove that a tx existed in the memory pool after the fact?

Post-pruning size with OP_FALSE outputs is zero, is it not?

I'm pretty sure any script ending in OP_FALSE is guaranteed unspendable. Even the OP_IF stuff has a check (via vfExec) to ensure that an IF block is terminated; there are no op-codes that mark a script as valid and terminate execution. OP_RETURN at the end of the script would work too I think.

I'm not worried about the size. As I said, all-fee transactions should be completely prunable away.

It's not just blockchain size, it's proof size. You have to be able to prove to an SPV client that the transactions exist, so you have to hand them the transactions and the merkle path to the block header. In addition, a malicious miner can currently pad transactions with scriptSigs of up to 10KiB each.

You don't need a whole lot of blocks, just a few would be sufficient, so gaps are OK. I'm not worried about miners who manage to mine the majority of all blocks within the consecutive-blocks period because if that could happen we'd have a 51%+ attacker, and then we'd all have bigger problems than this.

Again, you don't have to have 51%, you just need to get lucky once every few months.

I'm not sure minimum fees will be much larger in future. That's certainly not a consensus anyway. I see fees being similar, lower or even being zero for most casual users, that could happen in a world where miners are funded through assurance contracts of participants who need the security (merchants, exchanges, traders etc).

Well, you're assuming the block size limit will be lifted, which I'm dead set against.

Quote
The problem that I have with fee values is that the purchasing power with Bitcoin is not fixed - a passport created 3 years ago might have needed to burn 100's or 1000's of BTC to claim legitimacy; requiring any application to have some sort of exchange rate lookup as part of the passport application.

The app that checks the passports needs the block headers anyway. It can always record spot rates alongside its storage of the headers. There's no trust-free way to bootstrap that, but exchange rates are a matter of public record and would be hard to forge in any significant way without someone noticing that they're historically inconsistent.

Being able to sell part of the cost of an identity, as you suggest, fixes the problem of fee values just fine: sell off the part of the fee that you don't need. It's a lot nicer than the huge pain of trying to record spot rates, although having said that you will need some estimate of the liquidity and supply and demand for the current market for fidelity bonds for any given application.


One issue with making passports smart property that just occurred to me is that it'd increase the requirements to having a copy of the utxo set because otherwise you couldn't know if the control output on the tx you were given was already spent or not. So you'd have to either run a full node or extend the p2p protocol so membership in the utxo set could be remotely tested by thin clients.

Well, I think this touches on one of the reasons why I'm 100% against raising the 1MiB block limit, especially with the sort of floating limit that Gavin and others have proposed: you can't have really trustworthy fidelity-bonded transaction processors and unbounded block sizes at the same time. It's in miners' incentives to do what they can to undermine off-chain transactions to try to get as many transactions as they can for themselves, and just by increasing the block size they achieve that goal by further centralizing who can verify the utxo set.

Keep in mind it's not even just a total blockchain size issue, it's a bandwidth issue. 1MiB per 10 minutes is under 2KiB/s, small enough to keep up with even on highly bandwidth-constrained nodes stuck behind restrictive firewalls in unfriendly countries. A node operating under these conditions may never need to actually make an on-chain transaction, but it will be able to easily keep track of fund movements for the fidelity-bonded transaction processors it is depending on to move its Bitcoins, and it will be able to keep track of those movements with a much higher degree of certainty than with any of the utxo proposals, especially because that one single node, which might be the only connection to the blockchain for a big area, can redistribute everything every other node in that area needs to know. And again, in an unbounded-blocksize world miners have every reason to sabotage UTXO mechanisms if doing so will stop off-chain transaction processors from operating successfully.

I'll write more on this later, but increasing the block size is madness. I thought people were still taught how ugly O(n) and O(n^2) scaling was in school...
1019  Bitcoin / Development & Technical Discussion / Re: what is -checkblocks for, and why does it default so high? on: February 05, 2013, 01:53:42 AM
FWIW the default has been reduced to 288 blocks (2 days) for 0.8: https://github.com/bitcoin/bitcoin/pull/2222/files
1020  Bitcoin / Development & Technical Discussion / Re: Odd testnet block on: February 02, 2013, 09:47:55 PM
Two things are going on: block explorer doesn't understand multisig transactions, which is where the funds came from. Additionally, one of the tx outputs has an empty scriptPubKey, which means there are no conditions on spending it, and thus anyone can spend it by providing a scriptSig that evaluates to true, such as simply OP_TRUE by itself. Again, block explorer doesn't understand that as an address.
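
A toy sketch of why an empty scriptPubKey is anyone-can-spend (my own simplification of script evaluation, modelling only OP_TRUE): the scriptSig runs first, the scriptPubKey then runs on the same stack, and the spend is valid if the stack ends with a true value.

Code:
OP_TRUE = 0x51

def eval_script(script: list, stack: list) -> None:
    for op in script:
        if op == OP_TRUE:
            stack.append(b"\x01")
        else:
            raise NotImplementedError("toy interpreter: only OP_TRUE is modelled")

def spends(script_sig: list, script_pubkey: list) -> bool:
    stack: list = []
    eval_script(script_sig, stack)
    eval_script(script_pubkey, stack)       # an empty script is a no-op
    return bool(stack) and any(stack[-1])   # top of stack must be truthy

print(spends([OP_TRUE], []))   # True: OP_TRUE alone satisfies an empty scriptPubKey
print(spends([], []))          # False: nothing ever pushed a true value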